
Conversation

pietern (Contributor) commented Apr 29, 2019

Stack:
    :black_circle:  #19901 Finer grained consistency check in reducer  💛
    :white_circle:  #19897 Only call into reducer if torch.is_grad_enabled()  💚

During validation, gradient reduction is not needed and autograd is
never called; the model output will always be a detached tensor. After
the new reducer was merged, this meant that the reducer would find all
model parameters unused and kick off reduction for them. Once #19799 and
#19821 were merged, the model output during validation looked like an
output for which no parameters were used, so the reducer tried to kick
off reduction of zeroed gradients. Test for `torch.is_grad_enabled()`
and `self.training` before calling into the reducer.
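The guard described above can be sketched in plain Python. This is an illustrative model only, not the real implementation: `DDPSketch`, `ReducerStub`, and the `grad_enabled` flag are stand-ins for `DistributedDataParallel`, its reducer, and `torch.is_grad_enabled()`.

```python
class ReducerStub:
    """Stand-in for the DDP reducer; counts how often it is engaged."""

    def __init__(self):
        self.prepared = 0

    def prepare_for_backward(self, output):
        # In real DDP this marks parameters for gradient reduction.
        self.prepared += 1


class DDPSketch:
    """Illustrative wrapper showing the guard this PR adds."""

    def __init__(self, module):
        self.module = module
        self.training = True  # mirrors nn.Module.training
        self.reducer = ReducerStub()

    def forward(self, inputs, grad_enabled=True):
        output = self.module(inputs)
        # The fix: only engage the reducer when autograd will actually run
        # and the module is in training mode. During validation (eval mode,
        # grads disabled) the output is detached and no reduction is needed.
        if grad_enabled and self.training:
            self.reducer.prepare_for_backward(output)
        return output
```

With this guard, a validation pass (eval mode, gradients disabled) leaves the reducer untouched, while a training pass prepares it for backward as before.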

Differential Revision: D15118726
Differential Version: 80866244
@pytorchbot pytorchbot added oncall: distributed Add this issue/PR to distributed oncall triage queue module: nn Related to torch.nn labels Apr 29, 2019
pietern added 2 commits April 28, 2019 18:45
Differential Revision: D15118726
Differential Version: 80866256
Differential Revision: D15118726
Differential Version: 80866265
Differential Revision: D15118726
Differential Version: 80866864
facebook-github-bot (Contributor)

This pull request has been merged in 5525c41.

@pietern pietern deleted the export-D15118726 branch April 29, 2019 16:02
soumith pushed a commit that referenced this pull request Apr 29, 2019
Summary:
Pull Request resolved: #19897

During validation, gradient reduction is not needed and autograd is
never called; the model output will always be a detached tensor. After
the new reducer was merged, this meant that the reducer would find all
model parameters unused and kick off reduction for them. Once #19799 and
#19821 were merged, the model output during validation looked like an
output for which no parameters were used, so the reducer tried to kick
off reduction of zeroed gradients. Test for `torch.is_grad_enabled()`
and `self.training` before calling into the reducer.

Reviewed By: mrshenli

Differential Revision: D15118726

fbshipit-source-id: b0208f632a61cbe8110fa626fa427937b7f05924
zhangguanheng66 pushed a commit to zhangguanheng66/pytorch that referenced this pull request May 6, 2019