Change Dispatch.h to use ScalarType over Type #17527
Conversation
Differential Revision: D14233250 Differential Version: 73474786
Differential Revision: D14233250 Differential Version: 73486177
Differential Revision: D14235395 Differential Version: 73486176
Differential Revision: D14235395 Differential Version: 73589963
Differential Revision: D14235395 Differential Version: 74395307
Differential Revision: D14233250 Differential Version: 74395306
Differential Revision: D14235395 Differential Version: 74488005
Differential Revision: D14233250 Differential Version: 74537076
Differential Revision: D14235395 Differential Version: 74537072
gchanan
left a comment
This made a number of error messages worse.
For example, say I have a CPU and a CUDA implementation of the same function, fn, and in both cases I pass the name "fn" to the dispatch macro. As with most functions, I implement the Half variant on CUDA but not on CPU.
Then a user calls fn with a CPU Half tensor and gets the message "fn is not implemented for Half", which is wrong, because fn is implemented for Half on CUDA.
Can you go through and find the cases where this happens? (Even if no such case exists yet, note that it will once TH functions are shepherded through this macro.)
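For illustration, here is a minimal sketch of the situation being described. The op name fn, the helper fn_cpu_kernel, and the kernel body are hypothetical; only the naming convention matters.

```cpp
#include <ATen/ATen.h>
#include <ATen/Dispatch.h>

// Hypothetical CPU kernel for an op `fn` that also has a CUDA implementation.
at::Tensor fn_cpu_kernel(const at::Tensor& self) {
  at::Tensor input = self.contiguous();
  at::Tensor result = at::empty_like(input);
  // The name string passed to the macro is what the "not implemented" error
  // reports. If both the CPU and CUDA dispatch sites pass "fn", a CPU Half
  // input fails with a message saying fn is not implemented for Half, which is
  // misleading because the CUDA kernel does support Half. A backend-qualified
  // name such as "fn_cpu" keeps the message accurate.
  AT_DISPATCH_ALL_TYPES(input.scalar_type(), "fn_cpu", [&] {
    const scalar_t* src = input.data_ptr<scalar_t>();
    scalar_t* dst = result.data_ptr<scalar_t>();
    for (int64_t i = 0; i < input.numel(); ++i) {
      dst[i] = src[i];  // placeholder element-wise work
    }
  });
  return result;
}
```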
Differential Revision: D14233250 Differential Version: 74694645
Differential Revision: D14235395 Differential Version: 74694653
Differential Revision: D14235395 Differential Version: 74711130
Differential Revision: D14235395 Differential Version: 74714416
  Tensor self_c = inplace ? self : self.contiguous();
  Tensor result = inplace ? self : at::empty_like(self);
- AT_DISPATCH_ALL_TYPES(self.type(), "tril", [&]{
+ AT_DISPATCH_ALL_TYPES(self.scalar_type(), "tril", [&]{
tril_cpu here
  }
  Tensor self_c = checkTrilTriuBatchContiguous(self) ? self : self.contiguous();
- AT_DISPATCH_ALL_TYPES(self.type(), "tril", [&]{
+ AT_DISPATCH_ALL_TYPES(self.scalar_type(), "tril", [&]{
ditto
  Tensor self_c = inplace ? self : self.contiguous();
  Tensor result = inplace ? self : at::empty_like(self);
- AT_DISPATCH_ALL_TYPES(self.type(), "triu", [&]{
+ AT_DISPATCH_ALL_TYPES(self.scalar_type(), "triu", [&]{
ditto
  }
  Tensor self_c = checkTrilTriuBatchContiguous(self) ? self : self.contiguous();
- AT_DISPATCH_ALL_TYPES(self.type(), "triu", [&]{
+ AT_DISPATCH_ALL_TYPES(self.scalar_type(), "triu", [&]{
ditto
  if (!in_parallel_region()) {
    AT_DISPATCH_ALL_TYPES_AND(
-       at::ScalarType::Half, self.type(), "_copy_same_type_", [&]() {
+       at::ScalarType::Half, self.scalar_type(), "_copy_same_type_", [&]() {
cpu
  if (serial_path) {
    AT_DISPATCH_ALL_TYPES_AND(
-       at::ScalarType::Half, self.type(), "_copy_same_type_", [&]() {
+       at::ScalarType::Half, self.scalar_type(), "_copy_same_type_", [&]() {
cpu
- AT_DISPATCH_FLOATING_TYPES(input.type(),
+ AT_DISPATCH_FLOATING_TYPES(input.scalar_type(),
    "fractional_max_pool2d_out_frame", [&] {
cpu
  /* backprop */
  AT_DISPATCH_FLOATING_TYPES(
-     input.type(), "fractional_max_pool2d_backward_out_frame", [&] {
+     input.scalar_type(), "fractional_max_pool2d_backward_out_frame", [&] {
cpu
  AT_DISPATCH_FLOATING_TYPES(
-     input.type(),
+     input.scalar_type(),
      "fractional_max_pool3d_out_frame",
cpu
ezyang
left a comment
I stopped about 30% of the way through the patch, but it seems there are still a lot of missing sites.
@ezyang I only changed the strings that are used in both the CPU and the GPU kernels.
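To make the scope of that change concrete, a small sketch with a hypothetical op name my_op: the rule is that only name strings shared between a CPU and a CUDA dispatch site get a backend suffix.

```cpp
#include <ATen/ATen.h>
#include <ATen/Dispatch.h>

// Hypothetical CPU dispatch site. Because the CUDA file dispatches the same op
// under "my_op_cuda", the CPU site uses "my_op_cpu", so a missing dtype reports
// which backend's kernel is actually absent. A name string used by only one
// backend can keep its plain form, since it already identifies the failing kernel.
void my_op_cpu(at::Tensor& result) {
  AT_DISPATCH_ALL_TYPES(result.scalar_type(), "my_op_cpu", [&] {
    result.fill_(static_cast<scalar_t>(0));  // placeholder kernel body
  });
}
```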
Differential Revision: D14233250 Differential Version: 74781242
Differential Revision: D14235395 Differential Version: 74781247
Differential Revision: D14235395 Differential Version: 74811455
Differential Revision: D14235395 Differential Version: 74818459
Summary: Pull Request resolved: pytorch/pytorch#17527
Reviewed By: zou3519
Differential Revision: D14235395
fbshipit-source-id: 3f53e33f6794f1f14c2edf79014b8ef8397822c5
Changes:
1) pytorch#17527 changed dispatch macros to be ScalarType based instead of at::Type based. This broke cpp extensions that relied on dispatch macros. Since IMO these should be ScalarType based, we allow either at::Type or at::ScalarType to be passed, but passing at::Type will result in a deprecation warning.
2) Reintroduce macros that were deleted (AT_DISPATCH_ALL_TYPES_AND_HALF, AT_DISPATCH_COMPLEX_TYPES, AT_DISPATCH_ALL_TYPES_AND_HALF_AND_COMPLEX, AT_DISPATCH_ALL_TYPES_AND_COMPLEX); the AND_HALF ones now give a deprecation warning because more extensible macros were introduced in their place.
3) Make AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND a ScalarType-based macro (and update usages). This was the result of a logical merge conflict.
Summary:
Changes:
1) pytorch/pytorch#17527 changed dispatch macros to be ScalarType based instead of at::Type based. This broke cpp extensions that relied on dispatch macros. Since IMO these should be ScalarType based (and some extensions have already updated), we allow either at::Type or at::ScalarType to be passed, but passing at::Type will result in a deprecation warning.
2) Reintroduce macros that were deleted (AT_DISPATCH_ALL_TYPES_AND_HALF, AT_DISPATCH_COMPLEX_TYPES, AT_DISPATCH_ALL_TYPES_AND_HALF_AND_COMPLEX, AT_DISPATCH_ALL_TYPES_AND_COMPLEX); the AND_HALF ones now give a deprecation warning because more extensible macros were introduced in their place.
3) Make AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND a ScalarType-based macro (and update usages). This was the result of a logical merge conflict.
4) Add a new macro, C10_DEPRECATED_MESSAGE, for passing a deprecation message to the compiler. I didn't spend much time seeing whether this can be enabled for versions before C++14.
Pull Request resolved: pytorch/pytorch#17996
Reviewed By: ezyang
Differential Revision: D14446203
Pulled By: gchanan
fbshipit-source-id: 1da56e2e9c15aa8f913ebbf6bf1110c5b6dc375e
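A hedged sketch of the backward-compatibility mechanism described in points 1 and 4 above: the dispatch macro can normalize its first argument through an overloaded helper, so passing the legacy at::Type still works but triggers a compile-time deprecation warning. The helper and header names below are assumptions for illustration, not copied from the patch.

```cpp
#include <ATen/ATen.h>
#include <c10/util/Deprecated.h>

namespace detail {

// Already a ScalarType: pass it straight through.
inline at::ScalarType scalar_type(at::ScalarType s) {
  return s;
}

// Legacy path: accept an at::Type, extract its ScalarType, and mark the
// overload deprecated so old extensions get a warning instead of a hard break.
C10_DEPRECATED_MESSAGE(
    "passing at::Type to an AT_DISPATCH macro is deprecated; "
    "pass an at::ScalarType instead")
inline at::ScalarType scalar_type(const at::Type& t) {
  return t.scalarType();
}

} // namespace detail

// Inside the macro, the argument would be funneled through this overload set,
// e.g.:
//   const at::ScalarType _st = detail::scalar_type(TYPE);
// so both old and new call sites resolve to a ScalarType-based switch.
```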
Stack:
● #17527 Change Dispatch.h to use ScalarType over Type 💚
○ #17529 Remove some simple use cases of Type::ScalarType() 💚
○ #17530 Small clean up of aten_op 💚
○ #17601 Store ScalarType and Backend instead of Type in TensorIterator 💚
○ #17785 Remove Type::elementSizeInBytes 💚
○ #17723 Store python default type as PyTensorType instead of at::Type 💚
○ #17786 Pass ScalarType separately from Type in python constructors 💚
○ #17792 Remove Type::ScalarType() 💚
○ #17603 Remove conversion operator from Type to TensorOptions 💛
○ #17787 Add ScalarType arg to Type::options() 💛
Differential Revision: D14235395