Conversation

@smessmer
Contributor

@smessmer smessmer commented Dec 5, 2018

Stack:
    ⚪ #14819 Implement c10::Tensor 💚
    ⚫ #14820 Convert caffe2/aten Tensors to/from c10 💚
    ⚪ #15195 Use C10Tensor in the dispatcher 💚
    ⚪ #15324 Fix C10_API/C10_EXPORT for op schema registration 💚
    ⚪ #15367 Update flat_hash_map 💚
    ⚪ #15199 Move LayerNorm op schema to c10 💚
    ⚪ #15243 Enable calling caffe2 LayerNorm from PyTorch and JIT 💚
    ⚪ #15312 Remove Context from c10 operator schemas 💚
    ⚪ #15316 Move files to/from c10/core and c10/util 💚
    ⚪ #15317 Clean up Half 💚
    ⚪ #15407 caffe2::Tensor::is_same() 💚
    ⚪ #15853 Move blob to c10 💛
    ⚪ #15854 Code style cleanup 💛
    ⚪ #15855 Remove some dependencies from ivalue.h to ATen 💛
    ⚪ #15856 Fix export macros 💛

Differential Revision: D13348044

@dzhulgakov dzhulgakov requested review from ezyang and li-roy December 12, 2018 19:16
@dzhulgakov
Collaborator

I'd actually prefer the conversion to be explicit instead of implicit, for the following reasons:

  • it involves a refcount bump in case we copy
  • we will likely have enforcement checks between caffe2::Tensor and c10::Tensor for a while (because of contiguity and non-POD support). So it's better to be explicit for conversions that enforce invariants, in particular C10Tensor -> caffe2::Tensor

@li-roy
Contributor

@li-roy left a comment
I think enforcement happens before operators like ResizeLike, and as far as I know we can't get tensors to run in a net without going through an enforcement. I do agree that this is probably a better place to check for contiguity.

Tensor(const Tensor&) = default;
Tensor(Tensor&&) = default;

/* implicit */ Tensor(C10Tensor tensor)
@dzhulgakov
Collaborator

I don't remember on which diff I was leaving these comments, but I think conversions into at::Tensor or caffe2::Tensor should enforce invariants, and thus they should be explicit.

@smessmer
Contributor Author

Sounds good to me, I made them explicit.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jan 9, 2019
Summary: Pull Request resolved: pytorch/pytorch#14820

Reviewed By: dzhulgakov

Differential Revision: D13348044

fbshipit-source-id: 95008e6ead3cfc478696b1c203769241d4cf6ca8
@soumith soumith deleted the export-D13348044 branch February 21, 2019 23:30
@ezyang ezyang added the merged label Jun 25, 2019