Store python default type as PyTensorType instead of at::Type #17723
Conversation
Differential Revision: D14351082 Differential Version: 74496475
@gchanan has wondered if we shouldn't have another type representing "backend and scalar type" (which could serve the function that Type previously served). I'm not sure if it's necessary, but when I see you mucking around with …
    static PyTensorType& get_tensor_type(THPDtype *dtype, THPLayout *layout, bool is_cuda) {
any particular reason for this reordering? (did anything change?)
calling it in initialize_python_bindings now
IMO the new type is unnecessary. The long-term goal is to decouple as much as possible anyway.
gchanan left a comment:
Unless I'm misunderstanding something, this PR seems backwards to me.
We have a useful type for storing realized options of a Tensor -- at::Type. Okay, it also does other things that we don't like, such as being used for (internal) dispatch, but that's relatively easy to fix -- use a different type for internal dispatch.
Instead, we are changing the interfaces to return a type I never want -- PyTensorType.
How feasible is it to keep at::Type as a minimal "holding" type, and introduce new dispatch types?
      // match NumPy semantics, except use default tensor type instead of double.
    - if (length == 0) return torch::tensors::get_default_tensor_type().scalarType();
    + if (length == 0) {
    +   return static_cast<ScalarType>(torch::tensors::get_default_tensor_type().scalar_type);
Having to cast this each time is pretty terrible. Can we just return the ATen type from this function? I can't think of a case where I'd actually want the PyTensorType in C++ land.
I didn't think about what the behavior wrt the GIL is with this -- does it change if you are now returning a PyTensorType?
Stack:
:white_circle: #17530 Small clean up of aten_op 💚
:white_circle: #17601 Store ScalarType and Backend instead of Type in TensorIterator 💚
:white_circle: #17785 Remove Type::elementSizeInBytes 💚
:black_circle: #17723 Store python default type as PyTensorType instead of at::Type 💚
:white_circle: #17786 Pass ScalarType separately from Type in python constructors 💚
:white_circle: #17792 Remove Type::ScalarType() 💚
:white_circle: #17603 Remove conversion operator from Type to TensorOptions 💛
:white_circle: #17787 Add ScalarType arg to Type::options() 💛
Differential Revision: D14351082