Conversation

@li-roy li-roy (Contributor) commented Mar 6, 2019

Stack:
    :white_circle:  #17530 Small clean up of aten_op  💚
    :white_circle:  #17601 Store ScalarType and Backend instead of Type in TensorIterator  💚
    :white_circle:  #17785 Remove Type::elementSizeInBytes  💚
    :black_circle:  #17723 Store python default type as PyTensorType instead of at::Type  💚
    :white_circle:  #17786 Pass ScalarType separately from Type in python constructors  💚
    :white_circle:  #17792 Remove Type::ScalarType()  💚
    :white_circle:  #17603 Remove conversion operator from Type to TensorOptions  💛
    :white_circle:  #17787 Add ScalarType arg to Type::options()  💛

Differential Revision: D14351082

Differential Version: 74496475

royboy added 2 commits March 6, 2019 16:33
Differential Version: 74537074
Differential Version: 74694647
@li-roy li-roy changed the base branch from export-D14274754 to export-D14379074 March 8, 2019 00:28
royboy added 2 commits March 7, 2019 18:19
Differential Version: 74711141
Differential Version: 74714409
Differential Version: 74726648
@ezyang ezyang (Contributor) commented Mar 8, 2019

@gchanan has wondered if we shouldn't have another type representing "backend and scalar type" (which could serve the function that Type previously served). I'm not sure it's necessary, but when I see you mucking around with pair<Backend, ScalarType> it seems like something to consider. (I'm ambivalent because it's not altogether clear why such a concept is well founded, as opposed to just Backend or TensorTypeId.) What do you think?
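For reference, the kind of type being floated here might look like the following minimal sketch. All names (`BackendAndScalarType`, the enum values) are hypothetical illustrations, not the PR's actual code:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>

// Hypothetical stand-ins for at::Backend and at::ScalarType.
enum class Backend : int8_t { CPU, CUDA };
enum class ScalarType : int8_t { Float, Double, Long };

// A small value type bundling the two properties that at::Type
// previously carried together, cheap to copy and usable as a map key.
struct BackendAndScalarType {
  Backend backend;
  ScalarType scalar_type;

  bool operator==(const BackendAndScalarType& other) const {
    return backend == other.backend && scalar_type == other.scalar_type;
  }
};

// Hash so the pair can key an unordered_map (e.g. a lookup table).
struct BackendAndScalarTypeHash {
  size_t operator()(const BackendAndScalarType& k) const {
    return std::hash<int>()(static_cast<int>(k.backend) * 16 +
                            static_cast<int>(k.scalar_type));
  }
};
```

Whether such a bundle is "well founded" is exactly the question raised above: the struct is trivial to write, but it only earns its keep if the two fields genuinely travel together.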

}
}

static PyTensorType& get_tensor_type(THPDtype *dtype, THPLayout *layout, bool is_cuda) {
any particular reason for this reordering? (did anything change?)

li-roy (author) replied:
calling it in initialize_python_bindings now

royboy added 2 commits March 8, 2019 13:10
Differential Version: 74781239
Differential Version: 74811458
@li-roy li-roy (Contributor, Author) commented Mar 8, 2019

IMO the new type is unnecessary. The long-term goal is to decouple these as much as possible anyway.

royboy added 3 commits March 8, 2019 14:17
Differential Version: 74818461
Differential Version: 74843004
Differential Version: 74983511
@gchanan gchanan (Contributor) left a comment

Unless I'm misunderstanding something, this PR seems backwards to me.

We have a useful type for storing realized options of a Tensor -- at::Type. Okay, it also does other things that we don't like, such as being used for (internal) dispatch, but that's relatively easy to fix -- use a different type for internal dispatch.

Instead, we are changing the interfaces to return a type I never want -- PyTensorType.

How feasible is it to keep at::Type as a minimal "holding" type, and introduce new dispatch types?
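The split being suggested could be sketched roughly as follows. This is an illustrative outline only, with hypothetical names (`TypeProperties`, `DispatchKey`), not a claim about what at::Type or the PR actually does:

```cpp
// Hypothetical stand-ins for at::Backend and at::ScalarType.
enum class Backend { CPU, CUDA };
enum class ScalarType { Float, Double };

// The minimal "holding" type: just the realized options of a tensor,
// with no dispatch machinery, virtual methods, or registries attached.
struct TypeProperties {
  Backend backend;
  ScalarType scalar_type;
};

// Dispatch handled by a separate, narrower key type, so the holding
// type and the dispatch mechanism can evolve independently.
struct DispatchKey {
  Backend backend;  // dispatch might key on backend alone
};

// Deriving the dispatch key from the held properties keeps the two
// concerns connected at exactly one point.
DispatchKey dispatch_key_for(const TypeProperties& t) {
  return DispatchKey{t.backend};
}
```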

  // match NumPy semantics, except use default tensor type instead of double.
- if (length == 0) return torch::tensors::get_default_tensor_type().scalarType();
+ if (length == 0) {
+   return static_cast<ScalarType>(torch::tensors::get_default_tensor_type().scalar_type);
Having to cast this each time is pretty terrible. Can we just return the ATen type from this function? I can't think of a case where I'd actually want the PyTensorType in C++ land.
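A one-place accessor along the lines the reviewer asks for could look like this sketch. All names here are hypothetical simplifications (a real PyTensorType stores more than one field), meant only to show where the cast would live:

```cpp
// Hypothetical stand-in for at::ScalarType.
enum class ScalarType { Float, Double, Long };

// Hypothetical PyTensorType-like struct whose scalar_type field is
// stored as a raw char, which is what forces a cast at call sites.
struct PyTensorTypeLike {
  char scalar_type;
};

PyTensorTypeLike& get_default_tensor_type() {
  static PyTensorTypeLike t{static_cast<char>(ScalarType::Float)};
  return t;
}

// Do the cast once, in one accessor, so callers receive a ScalarType
// directly instead of repeating static_cast at every use.
ScalarType default_scalar_type() {
  return static_cast<ScalarType>(get_default_tensor_type().scalar_type);
}
```

The design point is simply that the cast is an implementation detail of the storage representation, so it belongs next to the storage rather than at every call site.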

I didn't think through what the behavior w.r.t. the GIL is with this -- does it change if you are now returning a PyTensorType?

@li-roy li-roy closed this Apr 16, 2019
@ezyang ezyang deleted the export-D14351082 branch May 30, 2019 15:48