Convert caffe2/aten Tensors to/from c10 #14820
Conversation
Differential Revision: D13348044 Differential Version: 65510290
I'd actually prefer the conversion to be explicit instead of implicit, for the following reasons:
li-roy left a comment:
I think enforcement happens before operators like ResizeLike, and as far as I know we can't get tensors to run in a net without going through an enforcement. I do agree that this is probably a better place to check for contiguity.
aten/src/ATen/core/Tensor.h (Outdated)

    Tensor(const Tensor&) = default;
    Tensor(Tensor&&) = default;

    /* implicit */ Tensor(C10Tensor tensor)
I don't remember on which diff I was leaving these comments, but I think conversions into at::Tensor or caffe2::Tensor should enforce invariants, and thus they should be explicit.
Sounds good to me, I made them explicit.
Summary: Pull Request resolved: pytorch/pytorch#14820
Reviewed By: dzhulgakov
Differential Revision: D13348044
fbshipit-source-id: 95008e6ead3cfc478696b1c203769241d4cf6ca8
Stack:
:white_circle: #14819 Implement c10::Tensor 💚
:black_circle: #14820 Convert caffe2/aten Tensors to/from c10 💚
:white_circle: #15195 Use C10Tensor in the dispatcher 💚
:white_circle: #15324 Fix C10_API/C10_EXPORT for op schema registration 💚
:white_circle: #15367 Update flat_hash_map 💚
:white_circle: #15199 Move LayerNorm op schema to c10 💚
:white_circle: #15243 Enable calling caffe2 LayerNorm from PyTorch and JIT 💚
:white_circle: #15312 Remove Context from c10 operator schemas 💚
:white_circle: #15316 Move files to/from c10/core and c10/util 💚
:white_circle: #15317 Clean up Half 💚
:white_circle: #15407 caffe2::Tensor::is_same() 💚
:white_circle: #15853 Move blob to c10 💛
:white_circle: #15854 Code style cleanup 💛
:white_circle: #15855 Remove some dependencies from ivalue.h to ATen 💛
:white_circle: #15856 Fix export macros 💛