Conversation

@killeent (Contributor) commented Jul 20, 2017

As title: moves to using the ATen Tensor type to wrap TH tensors for use in autograd. Includes some fixes to ATen discovered during this refactor.
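For readers unfamiliar with the wrapping pattern, the core idea can be sketched with a toy refcounted handle. `THToyTensor`, `ToyTensor`, and `unsafeGetTH` below are illustrative names only, not the actual TH/ATen API:

```cpp
#include <cassert>
#include <cstddef>

// Toy stand-in for a refcounted TH-style C struct (illustrative, not real TH).
struct THToyTensor {
  int refcount = 1;
  std::size_t numel = 0;
};

// Minimal RAII handle, analogous in spirit to an ATen tensor wrapping a
// THTensor: it owns one reference and releases it on destruction, so
// autograd code can pass tensors by value without manual retain/free calls.
class ToyTensor {
 public:
  explicit ToyTensor(THToyTensor* th, bool retain) : th_(th) {
    if (retain && th_) ++th_->refcount;  // caller keeps its own reference
  }
  ToyTensor(const ToyTensor& other) : th_(other.th_) {
    if (th_) ++th_->refcount;            // copy retains
  }
  ToyTensor(ToyTensor&& other) noexcept : th_(other.th_) {
    other.th_ = nullptr;                 // move steals the reference
  }
  ~ToyTensor() {
    if (th_ && --th_->refcount == 0) delete th_;
  }
  THToyTensor* unsafeGetTH() const { return th_; }

 private:
  THToyTensor* th_;
};
```

The payoff, as in the real refactor, is that the wrapper's destructor handles the TH refcount, so the hand-written retain/free bookkeeping in autograd can be deleted.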

@ezyang (Contributor) commented Jul 20, 2017

Mucho thanks!

ezyang referenced this pull request in ezyang/pytorch-unattached Jul 21, 2017
CC @killeent, I just sliced this off #2170.

Signed-off-by: Edward Z. Yang <[email protected]>
ezyang referenced this pull request in ezyang/pytorch-unattached Jul 24, 2017
CC @killeent, I just sliced this off #2170.

Signed-off-by: Edward Z. Yang <[email protected]>
@killeent killeent force-pushed the aten-perturbation branch 4 times, most recently from 686f119 to b656a67 Compare July 26, 2017 19:36
@killeent killeent closed this Jul 26, 2017
@killeent killeent reopened this Jul 26, 2017
ezyang referenced this pull request in ezyang/pytorch-unattached Jul 26, 2017
CC @killeent, I just sliced this off #2170.

Signed-off-by: Edward Z. Yang <[email protected]>
@killeent killeent force-pushed the aten-perturbation branch from b656a67 to f919a21 Compare July 26, 2017 20:03
@apaszke (Contributor) left a comment

I reviewed the autograd changes and they look good. I don't know why you removed all these std::moves, though. They are no longer necessary with at::Tensors, but they should still be valid, and they improve performance.
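The point about `std::move` can be sketched with a toy refcounted handle (illustrative names, not the real ATen types): copying a handle has to retain a reference, while moving just steals the pointer, so the `std::move`s remain a cheap win even now that plain copies are valid.

```cpp
#include <cassert>
#include <utility>

// Toy refcounted handle standing in for at::Tensor (illustrative only).
struct Handle {
  static int bumps;                    // counts retain-style refcount ops
  Handle() = default;
  Handle(const Handle&) { ++bumps; }   // copy ctor: retain (costs one bump)
  Handle(Handle&&) noexcept {}         // move ctor: steals the ref, no bump
};
int Handle::bumps = 0;

// A by-value sink, like autograd functions that take tensors by value.
inline void consume(Handle h) { (void)h; }
```

Calling `consume(h)` copies into the parameter and bumps the count; `consume(std::move(h))` does not, which is the performance argument for keeping the moves.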

This comment was marked as off-topic.


@zdevito (Contributor) left a comment

I reviewed the aten-specific changes needed here and they look good to me! The sparse handling is good enough to get us started and will serve as a good base for adding more support later.

This comment was marked as off-topic.

@killeent killeent force-pushed the aten-perturbation branch 3 times, most recently from ae9be8b to 04a91ec Compare July 27, 2017 13:56
@killeent killeent force-pushed the aten-perturbation branch from 04a91ec to 0135c81 Compare July 27, 2017 14:20
@killeent killeent changed the title from "[WIP] [DO NOT REVIEW] aten in autograd" to "Replace thpp::Tensor with ATen Tensor in autograd csrc" Jul 27, 2017
@ezyang (Contributor) commented Jul 27, 2017

I think the easiest way to make the Travis build pass is to revert #2214 and investigate out-of-band why the Trusty gcc is flaking like this. We've seen this error intermittently when developing on Trusty in the jit branch and haven't nailed down why.

@apaszke apaszke force-pushed the aten-perturbation branch 5 times, most recently from 808fee4 to 81ff9e9 Compare July 27, 2017 19:26
@apaszke apaszke force-pushed the aten-perturbation branch from 81ff9e9 to 3e4ce98 Compare July 27, 2017 19:43
@ezyang (Contributor) commented Jul 27, 2017

Uh oh, python 3 Jenkins has an intermittent segfault :(

@apaszke apaszke closed this Jul 28, 2017
@apaszke apaszke reopened this Jul 28, 2017
@apaszke apaszke merged commit c304d04 into pytorch:master Jul 28, 2017
ruotianluo added a commit to ruotianluo/pytorch-1 that referenced this pull request Jul 31, 2017
* commit '8262920b72374b1d9643f35057663ab02ab20330':
  Add ATen overload to AutoGPU. (pytorch#2234)
  Add comments for default value (pytorch#2242)
  Remove dead THPP code that has been replaced with ATen objects. (pytorch#2235)
  fix a bug where an uninitialized at::Tensor was passed to createPyObject (pytorch#2239)
  Replace thpp::Tensor with ATen Tensor in autograd csrc (pytorch#2170)
  Added aarch64 support (pytorch#2226)
ruotianluo added a commit to ruotianluo/pytorch-1 that referenced this pull request Jul 31, 2017
* commit '8262920b72374b1d9643f35057663ab02ab20330': (272 commits)
  Add ATen overload to AutoGPU. (pytorch#2234)
  Add comments for default value (pytorch#2242)
  Remove dead THPP code that has been replaced with ATen objects. (pytorch#2235)
  fix a bug where an uninitialized at::Tensor was passed to createPyObject (pytorch#2239)
  Replace thpp::Tensor with ATen Tensor in autograd csrc (pytorch#2170)
  Added aarch64 support (pytorch#2226)
  Increase tol. for float tensor qr big test.
  Improve Variable.retain_grad
  add `retain_grad` method to Variable, so gradient gets stored during backprop on non-user variables
  Implement BatchNorm double backwards (pytorch#2207)
  [bugfix] in bce_with_logits logsumexp calculation (pytorch#2221)
  fix for ATen API Change
  Opt into Trusty builds. (pytorch#2214)
  allow retain to be specified for unsafeTensorFromTH
  Deduplicate THPUtils_checkLong/THPUtils_unpackLong (pytorch#2218)
  fix osx build errors related to long/int64_t
  Note [Undefined-dim versus 0-dim]
  Remove __func__ hack in auto nn.
  Enable Conv groups gradgradchecks. (pytorch#2216)
  fix a bug where some scalars were getting truncated to integers incorrectly.
  ...
houseroad added a commit to houseroad/pytorch that referenced this pull request Jul 17, 2019
…22d5dd

Summary:
Previous import was 806aa863020fa180e57f576cb032ec44ce8ddcca

Included changes:
- **[70706498](onnx/onnx@70706498)**: TensorProto::INT8 & INT16 were missed here (pytorch#2164) <ZINEKS>
- **[8218a4ea](onnx/onnx@8218a4ea)**: Fix LabelEncoder's shape inference (pytorch#2170) <Wei-Sheng Chin>
- **[0f1a9a1c](onnx/onnx@0f1a9a1c)**: Fixing a unit test in Cumsum Operator (pytorch#2157) <Jeff Saremi>
- **[2c03cff0](onnx/onnx@2c03cff0)**: [New Operator] CumSum (pytorch#2030) <Jeff Saremi>
- **[220b8300](onnx/onnx@220b8300)**: Fix globalpool output shape (pytorch#2147) <daquexian>

Differential Revision: D16341736

fbshipit-source-id: fcd136f20eadfd3a78ae2cd928a4d6eb7c709f4c
facebook-github-bot pushed a commit that referenced this pull request Jul 17, 2019
…22d5dd (#22981)

Summary:
Pull Request resolved: #22981

Previous import was 806aa863020fa180e57f576cb032ec44ce8ddcca

Included changes:
- **[70706498](onnx/onnx@70706498)**: TensorProto::INT8 & INT16 were missed here (#2164) <ZINEKS>
- **[8218a4ea](onnx/onnx@8218a4ea)**: Fix LabelEncoder's shape inference (#2170) <Wei-Sheng Chin>
- **[0f1a9a1c](onnx/onnx@0f1a9a1c)**: Fixing a unit test in Cumsum Operator (#2157) <Jeff Saremi>
- **[2c03cff0](onnx/onnx@2c03cff0)**: [New Operator] CumSum (#2030) <Jeff Saremi>
- **[220b8300](onnx/onnx@220b8300)**: Fix globalpool output shape (#2147) <daquexian>

Reviewed By: benoitsteiner

Differential Revision: D16341736

fbshipit-source-id: 7e7a2684d8c821991231bfd6558f9f6cb4fb05fb
azazhu pushed a commit to azazhu/pytorch that referenced this pull request Nov 12, 2022
* Refactoring of lower_alias_memory
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants