Conversation

@wanchaol
Collaborator

@wanchaol wanchaol commented Nov 11, 2019

Stack from ghstack:

Summary:

`torch.autograd.grad` can return None if one of the inputs is not in the
autograd graph or does not require grad. This fixes it so that it returns a
list of optional tensors instead of a list of tensors.

Unfortunately this might be a BC-breaking change, but I think such usage is
rare both internally and externally (only training uses it, and most training
uses backward instead of autograd.grad), so whitelist it.
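The behavior being fixed can be reproduced in eager mode with a short sketch (tensor shapes here are arbitrary):

```python
import torch

x = torch.randn(3, requires_grad=True)
unused = torch.randn(3, requires_grad=True)  # never used in the graph below
loss = (2 * x).sum()

# `unused` does not contribute to `loss`, so its gradient slot is None;
# the result is effectively a List[Optional[Tensor]], not a List[Tensor].
grads = torch.autograd.grad(loss, [x, unused], allow_unused=True)
print(grads[0])            # d(loss)/dx == 2 everywhere
print(grads[1] is None)    # True
```

Without `allow_unused=True`, eager mode raises a RuntimeError instead of returning None, which is why the None case matters for the TorchScript signature.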

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D18491642](https://our.internmc.facebook.com/intern/diff/D18491642)

Summary:

`torch.autograd.grad` can return None if one of the inputs is not in the
autograd graph or does not require grad. This fixes it so that it returns a
list of optional tensors instead of a list of tensors.

This should not have any BC issue, because List[Tensor] <: List[Optional[Tensor]].

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:

[ghstack-poisoned]
@wanchaol wanchaol requested a review from apaszke as a code owner November 11, 2019 19:59
wanchaol added a commit that referenced this pull request Nov 11, 2019
ghstack-source-id: 3ec806f
Pull Request resolved: #29577
@facebook-github-bot facebook-github-bot added the oncall: jit label Nov 11, 2019
wanchaol added a commit that referenced this pull request Nov 11, 2019
ghstack-source-id: 0bce7a6
Pull Request resolved: #29577
@wanchaol wanchaol requested review from suo and zdevito November 11, 2019 21:34
@eellison
Contributor

List[Tensor] doesn't subtype List[Optional[Tensor]]

@wanchaol
Collaborator Author

> List[Tensor] doesn't subtype List[Optional[Tensor]]

@eellison hmmm ok, but List[Tensor] can be passed into a function that accepts List[Optional[Tensor]], since we try to unify the inner types, which should resolve the BC concern.

@eellison
Contributor

> > List[Tensor] doesn't subtype List[Optional[Tensor]]
>
> @eellison hmmm ok, but List[Tensor] can be passed into a function that accepts List[Optional[Tensor]], since we try to unify the inner types, which should resolve the BC concern.

I don't think List[Tensor] can be passed into a function that accepts List[Optional[Tensor]].

@wanchaol
Collaborator Author

> > > List[Tensor] doesn't subtype List[Optional[Tensor]]
> >
> > @eellison hmmm ok, but List[Tensor] can be passed into a function that accepts List[Optional[Tensor]], since we try to unify the inner types, which should resolve the BC concern.
>
> I don't think List[Tensor] can be passed into a function that accepts List[Optional[Tensor]].

It can, I think; the following code works:

```python
@torch.jit.script
def return_tensor_list():
    # type: () -> List[Tensor]
    return [torch.randn(3, 2), torch.randn(3, 2)]

@torch.jit.script
def test_use_opt_list(t_list):
    # type: (List[Optional[Tensor]]) -> Optional[Tensor]
    return t_list[0]

test_use_opt_list(return_tensor_list())
```

```
Out[5]:
tensor([[ 0.5223, -0.9674],
        [ 0.2795, -0.4572],
        [-0.0400, -1.8270]])
```

@eellison
Contributor

> It can, I think; the following code works:
>
> ```python
> test_use_opt_list(return_tensor_list())
> ```

That call goes through the pybind layer, so it's different: inputs coming from Python don't have a well-formed static type yet. Try:

```python
@torch.jit.script
def return_tensor_list():
    # type: () -> List[Tensor]
    return [torch.randn(3, 2), torch.randn(3, 2)]

@torch.jit.script
def test_use_opt_list(t_list):
    # type: (List[Optional[Tensor]]) -> Optional[Tensor]
    return t_list[0]

@torch.jit.script
def will_fail():
    test_use_opt_list(return_tensor_list())
```
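The invariance being appealed to here is the standard mutable-container argument: if `List[Tensor]` were a subtype of `List[Optional[Tensor]]`, a callee could insert None into a list the caller still sees as `List[Tensor]`. A minimal sketch in plain Python `typing` terms (function and variable names are made up for illustration):

```python
from typing import List, Optional

def insert_none(xs: List[Optional[int]]) -> None:
    # Perfectly legal for a List[Optional[int]]: None is a valid element.
    xs.append(None)

opts: List[Optional[int]] = [1, 2]
insert_none(opts)       # fine: opts is declared to allow None
print(opts)             # [1, 2, None]

ints: List[int] = [1, 2]
# insert_none(ints)     # a sound checker rejects this call: if it were
#                       # allowed, `ints` would now contain None, making its
#                       # declared element type a lie.
```

This is why mutable lists must be invariant in their element type, and why passing `List[Tensor]` where `List[Optional[Tensor]]` is expected only works at the Python boundary, where the list's static type is reconstructed from its contents.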

wanchaol added a commit that referenced this pull request Nov 13, 2019
ghstack-source-id: 19f6a87
Pull Request resolved: #29577
Contributor

@zdevito zdevito left a comment

We should land this, but we need to be careful because it does break BC. Why is it OK to break BC?

  1. It is a bugfix, so the old behavior was not correct when gradients were None.
  2. It is unlikely to be used in any TorchScript inference models.
  3. grad was introduced relatively recently, so we do not expect a ton of serialized uses of it.

Before landing, we should check that we are not breaking known predictor models.

wanchaol added a commit that referenced this pull request Nov 13, 2019
ghstack-source-id: b1bd342
Pull Request resolved: #29577
wanchaol added a commit that referenced this pull request Nov 15, 2019
ghstack-source-id: 4f1c9e2
Pull Request resolved: #29577
wanchaol added a commit that referenced this pull request Nov 21, 2019
ghstack-source-id: d46686b
Pull Request resolved: #29577
@facebook-github-bot facebook-github-bot deleted the gh/wanchaol/65/head branch December 10, 2019 15:20