
Conversation

@xiaomengy (Contributor)

Summary: Add autograd support for layer_norm on CPU. After this diff, both eager PyTorch and JIT models can automatically benefit from the performance improvement of nn.functional.layer_norm.

Differential Revision: D15483790
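
To make the summary concrete, here is a minimal sketch (not part of the PR itself; the shapes, function names, and the `torch.jit.script` wrapper are illustrative assumptions) of the two paths mentioned above: calling `nn.functional.layer_norm` from eager PyTorch and from a scripted function, each followed by a backward pass that would exercise the CPU layer_norm autograd path.

```python
# Minimal sketch (not from the PR): exercising nn.functional.layer_norm
# in eager mode and under torch.jit.script on CPU; shapes are arbitrary.
import torch
import torch.nn.functional as F

x = torch.randn(8, 16, 32, requires_grad=True)   # (batch, seq, features) on CPU
weight = torch.ones(32, requires_grad=True)
bias = torch.zeros(32, requires_grad=True)

# Eager mode: forward and backward run through the CPU layer_norm kernels.
y = F.layer_norm(x, normalized_shape=(32,), weight=weight, bias=bias, eps=1e-5)
y.sum().backward()

# Scripted version: the JIT graph calls the same ATen layer_norm op.
@torch.jit.script
def scripted_layer_norm(inp: torch.Tensor, w: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return F.layer_norm(inp, [32], w, b, 1e-5)

out = scripted_layer_norm(x, weight, bias)
out.mean().backward()
```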

@pytorchbot pytorchbot added module: cpu CPU specific problem (e.g., perf, algorithm) module: internals Related to internal abstractions in c10 and ATen module: operators labels May 23, 2019
@soumith soumith requested a review from gchanan May 26, 2019 16:17
Summary:
Pull Request resolved: pytorch#20883

Add autograd support for layer_norm on CPU. After this diff, both eager PyTorch and JIT models can automatically benefit from the performance improvement of nn.functional.layer_norm.

Reviewed By: zheng-xq

Differential Revision: D15483790

fbshipit-source-id: 6333e050caef479085400c29af270bb8e23e80bc
@facebook-github-bot (Contributor)

This pull request has been merged in eaa3ba6.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jun 3, 2019
Summary:
Pull Request resolved: pytorch/pytorch#20883

Add autograd support for layer_norm on CPU. After this diff, both eager PyTorch and JIT models can automatically benefit from the performance improvement of nn.functional.layer_norm.

Reviewed By: zheng-xq

Differential Revision: D15483790

fbshipit-source-id: 94ed3b16ab6d83ca6c254dbcfb224ff7d88837f3
