[jit] fix aten::grad to return optional list #29577
Conversation
Summary: `torch.autograd.grad` can return None if one of the inputs is not in the autograd graph or does not require grad; this fixes it so that it returns a list of optional tensors instead of a list of tensors. This should not have any BC issue, because List[Tensor] :< List[Optional[Tensor]]. [ghstack-poisoned]
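For reference, this is the eager-mode behavior the scripted `aten::grad` needs to mirror: a minimal sketch (assuming a working torch install) showing `autograd.grad` returning None for an input that is outside the graph.

```python
import torch

x = torch.randn(3, requires_grad=True)
y = torch.randn(3, requires_grad=True)
out = (x * 2).sum()  # y never participates in the graph

# allow_unused=True makes eager autograd.grad return None for y; the
# scripted aten::grad therefore needs List[Optional[Tensor]] outputs.
grads = torch.autograd.grad(out, [x, y], allow_unused=True)
print(grads[1])  # None
```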
List[Tensor] doesn't subtype List[Optional[Tensor]]
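The claim above follows from list invariance: because lists are mutable, a callee typed to take an Optional-element list may legally store a None into it, which would corrupt a caller's list of non-optional elements. A minimal sketch of the hazard, using Python ints in place of tensors (`insert_none` is illustrative, not from the PR):

```python
from typing import List, Optional

def insert_none(xs: List[Optional[int]]) -> None:
    # Legal for the declared type: an Optional list may hold None.
    xs.append(None)

ints: List[int] = [1, 2, 3]
# If List[int] were a subtype of List[Optional[int]], this call would
# type-check, yet afterwards a None sits inside a list typed List[int].
insert_none(ints)
print(ints)  # [1, 2, 3, None]
```

This is why static checkers treat mutable list types as invariant, even though a read-only view would be safe to widen.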
@eellison hmmm ok, but List[Tensor] can be passed into a function that accepts List[Optional[Tensor]] since we try to unify the inner types, which should resolve the BC concern.
I don't think List[Tensor] can be passed into a function that accepts List[Optional[Tensor]].
it can I think, the following code works:

```python
@torch.jit.script
def return_tensor_list():
    # type: () -> List[Tensor]
    return [torch.randn(3, 2), torch.randn(3, 2)]

@torch.jit.script
def test_use_opt_list(t_list):
    # type: (List[Optional[Tensor]]) -> Optional[Tensor]
    return t_list[0]

test_use_opt_list(return_tensor_list())
```

which prints:

```
tensor([[ 0.5223, -0.9674],
        [ 0.2795, -0.4572],
        [-0.0400, -1.8270]])
```
That's at the pybind layer so it's different, because inputs from Python don't have a well-formed type yet. Try:
Summary: `torch.autograd.grad` can return None if one of the inputs is not in the autograd graph or does not require grad; this fixes it so that it returns a list of optional tensors instead of a list of tensors. This might have a BC issue unfortunately, but I think it's rare both internally and externally (only training uses it, and most training uses backward instead of autograd.grad), so whitelist it. [ghstack-poisoned]
We should land this, but we need to be careful because it does break BC. Why is it ok to break BC?
- It is a bugfix, so the old behavior was not correct when gradients were None.
- It is unlikely to be used in any TorchScript inference models.
- grad was introduced relatively recently, so we do not expect a ton of serialized use of it.
Before landing, we should check that we are not breaking known predictor models.
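Once this lands, scripted callers of `aten::grad` receive a List[Optional[Tensor]] and must guard each entry before using it. A minimal sketch of the required call-site pattern, using floats in place of tensors (`sum_grads` is a hypothetical helper, not from the PR):

```python
from typing import List, Optional

def sum_grads(grads: List[Optional[float]]) -> float:
    # Entries are None for inputs that were not part of the autograd
    # graph; skip them rather than assuming every entry is a value.
    total = 0.0
    for g in grads:
        if g is not None:
            total += g
    return total

print(sum_grads([1.0, None, 2.5]))  # 3.5
```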
Stack from ghstack:
Summary:
`torch.autograd.grad` can return None if one of the inputs is not in the autograd graph or does not require grad; this fixes it so that it returns a list of optional tensors instead of a list of tensors.
This might have a BC issue unfortunately, but I think it's rare both internally and externally (only training uses it, and most training uses backward instead of autograd.grad), so whitelist it.
Differential Revision: D18491642