
Conversation

@li-roy (Contributor) commented Feb 27, 2019

Stack:
    ●  #17527 Change Dispatch.h to use ScalarType over Type  💚
    ○  #17529 Remove some simple use cases of Type::ScalarType()  💚
    ○  #17530 Small clean up of aten_op  💚
    ○  #17601 Store ScalarType and Backend instead of Type in TensorIterator  💚
    ○  #17785 Remove Type::elementSizeInBytes  💚
    ○  #17723 Store python default type as PyTensorType instead of at::Type  💚
    ○  #17786 Pass ScalarType separately from Type in python constructors  💚
    ○  #17792 Remove Type::ScalarType()  💚
    ○  #17603 Remove conversion operator from Type to TensorOptions  💛
    ○  #17787 Add ScalarType arg to Type::options()  💛

Differential Revision: D14235395
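
For context, a minimal sketch of the call-site change #17527 makes at dispatch sites: the macros now take the tensor's ScalarType rather than its full at::Type. The tril example below mirrors hunks reviewed later in this thread; the kernel bodies are elided.

    // Before: the dispatch macro received the full at::Type.
    AT_DISPATCH_ALL_TYPES(self.type(), "tril", [&]{
      // scalar_t is bound to the C++ type matching the dispatched dtype
      // ... CPU kernel body ...
    });

    // After: only the ScalarType (the dtype) is passed.
    AT_DISPATCH_ALL_TYPES(self.scalar_type(), "tril", [&]{
      // ... CPU kernel body ...
    });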

royboy added 3 commits February 26, 2019 14:15
Differential Revision: D14233250
Differential Version: 73474786
Differential Revision: D14233250
Differential Version: 73486177
Differential Revision: D14235395
Differential Version: 73486176
Differential Revision: D14235395
Differential Version: 73589963
royboy added 3 commits March 5, 2019 19:12
Differential Revision: D14235395
Differential Version: 74395307
Differential Revision: D14233250
Differential Version: 74395306
Differential Revision: D14235395
Differential Version: 74488005
royboy added 2 commits March 6, 2019 16:33
Differential Revision: D14233250
Differential Version: 74537076
Differential Revision: D14235395
Differential Version: 74537072
@gchanan (Contributor) left a comment


This made a number of error messages worse.

For example, say I have a CPU and a CUDA implementation of the same function, fn, and in both cases I pass the name "fn" to the dispatch macro. As with most functions, I implement the half variant on CUDA but not on CPU.

Then the user calls "fn" with a CPU half tensor and gets the message "fn is not implemented for Half", which is wrong: fn does support Half on CUDA, it's only the CPU half variant that is missing.

Can you go through and find the cases where this exists? (If such a case doesn't exist yet, note that it will if you shepherd TH functions through this macro.)
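
To make the failure mode concrete, here is a minimal sketch (fn, fn_cpu, and fn_cuda are illustrative names, not code from this PR). Both dispatch sites pass the same string "fn", so the macro can only report "fn is not implemented for Half" no matter which backend is actually missing the Half case; the inline comments further down suggest backend-specific names for such CPU sites.

    #include <ATen/ATen.h>
    #include <ATen/Dispatch.h>

    using at::Tensor;

    // CPU kernel: Half is not among the types AT_DISPATCH_ALL_TYPES covers,
    // so a CPU Half tensor hits the macro's error path.
    Tensor fn_cpu(const Tensor& self) {
      Tensor result = at::empty_like(self);
      AT_DISPATCH_ALL_TYPES(self.scalar_type(), "fn", [&] {  // error reads "fn is not implemented for Half"
        // ... CPU kernel body using scalar_t ...
      });
      return result;
    }

    // CUDA kernel: Half is explicitly added, so the message above is misleading;
    // fn does support Half, just not on CPU.
    Tensor fn_cuda(const Tensor& self) {
      Tensor result = at::empty_like(self);
      AT_DISPATCH_ALL_TYPES_AND(at::ScalarType::Half, self.scalar_type(), "fn", [&] {
        // ... CUDA kernel body using scalar_t ...
      });
      return result;
    }

    // Passing backend-specific name strings, e.g. "fn_cpu" / "fn_cuda" (as the
    // inline comments below suggest with "tril_cpu"), makes the error unambiguous.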

royboy added 2 commits March 7, 2019 16:27
Differential Revision: D14233250
Differential Version: 74694645
Differential Revision: D14235395
Differential Version: 74694653
royboy added 2 commits March 7, 2019 18:19
Differential Revision: D14235395
Differential Version: 74711130
Differential Revision: D14235395
Differential Version: 74714416
  Tensor self_c = inplace ? self : self.contiguous();
  Tensor result = inplace ? self : at::empty_like(self);
- AT_DISPATCH_ALL_TYPES(self.type(), "tril", [&]{
+ AT_DISPATCH_ALL_TYPES(self.scalar_type(), "tril", [&]{

tril_cpu here

  }
  Tensor self_c = checkTrilTriuBatchContiguous(self) ? self : self.contiguous();
- AT_DISPATCH_ALL_TYPES(self.type(), "tril", [&]{
+ AT_DISPATCH_ALL_TYPES(self.scalar_type(), "tril", [&]{

ditto

  Tensor self_c = inplace ? self : self.contiguous();
  Tensor result = inplace ? self : at::empty_like(self);
- AT_DISPATCH_ALL_TYPES(self.type(), "triu", [&]{
+ AT_DISPATCH_ALL_TYPES(self.scalar_type(), "triu", [&]{

ditto

  }
  Tensor self_c = checkTrilTriuBatchContiguous(self) ? self : self.contiguous();
- AT_DISPATCH_ALL_TYPES(self.type(), "triu", [&]{
+ AT_DISPATCH_ALL_TYPES(self.scalar_type(), "triu", [&]{

ditto

  if (!in_parallel_region()) {
    AT_DISPATCH_ALL_TYPES_AND(
-       at::ScalarType::Half, self.type(), "_copy_same_type_", [&]() {
+       at::ScalarType::Half, self.scalar_type(), "_copy_same_type_", [&]() {

cpu

  if (serial_path) {
    AT_DISPATCH_ALL_TYPES_AND(
-       at::ScalarType::Half, self.type(), "_copy_same_type_", [&]() {
+       at::ScalarType::Half, self.scalar_type(), "_copy_same_type_", [&]() {

cpu


- AT_DISPATCH_FLOATING_TYPES(input.type(),
+ AT_DISPATCH_FLOATING_TYPES(input.scalar_type(),
      "fractional_max_pool2d_out_frame", [&] {

cpu

  /* backprop */
  AT_DISPATCH_FLOATING_TYPES(
-     input.type(), "fractional_max_pool2d_backward_out_frame", [&] {
+     input.scalar_type(), "fractional_max_pool2d_backward_out_frame", [&] {

cpu

  AT_DISPATCH_FLOATING_TYPES(
-     input.type(),
+     input.scalar_type(),
      "fractional_max_pool3d_out_frame",

cpu

@ezyang (Contributor) left a comment

I stopped something like 30% of the way through the patch, but it seems there are still a lot of missing sites.

@li-roy (Contributor, Author) commented Mar 8, 2019

@ezyang I only changed the strings that are used in both CPU and GPU kernels.

royboy added 4 commits March 8, 2019 13:10
Differential Revision: D14233250
Differential Version: 74781242
Differential Revision: D14235395
Differential Version: 74781247
Differential Revision: D14235395
Differential Version: 74811455
Differential Revision: D14235395
Differential Version: 74818459
@li-roy li-roy changed the base branch from export-D14233250 to master March 8, 2019 22:18
zdevito pushed a commit to zdevito/ATen that referenced this pull request Mar 9, 2019
Summary: Pull Request resolved: pytorch/pytorch#17527

Reviewed By: zou3519

Differential Revision: D14235395

fbshipit-source-id: 3f53e33f6794f1f14c2edf79014b8ef8397822c5
gchanan added a commit to gchanan/pytorch that referenced this pull request Mar 13, 2019
Changes:
1) pytorch#17527 changed dispatch macros to be ScalarType based instead of at::Type based. This broke cpp extensions that relied on dispatch macros. Since IMO these should be ScalarType based, we allow either at::Type or at::ScalarType to be passed, but passing at::Type will result in a deprecation warning.
2) Reintroduce macros that were deleted (AT_DISPATCH_ALL_TYPES_AND_HALF, AT_DISPATCH_COMPLEX_TYPES, AT_DISPATCH_ALL_TYPES_AND_HALF_AND_COMPLEX, AT_DISPATCH_ALL_TYPES_AND_COMPLEX); the AND_HALF ones now give a deprecation warning because more extensible macros were introduced in their place.
3) Makes AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND into a ScalarType based macro (and updates usages). This was the result of a logical merge conflict.
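
A minimal sketch of the compatibility shim point 1 describes, assuming the dispatch macros normalize their first argument through an internal helper; the helper name detail::scalar_type and its placement are assumptions here, not necessarily the code that landed.

    #include <ATen/ATen.h>

    namespace detail {

    // A ScalarType passes straight through.
    inline at::ScalarType scalar_type(at::ScalarType s) {
      return s;
    }

    // An at::Type still works, but the call site gets a compile-time deprecation
    // warning (this assumes at::Type still exposes its ScalarType at this point).
    [[deprecated("passing at::Type to an AT_DISPATCH macro is deprecated; pass an at::ScalarType instead")]]
    inline at::ScalarType scalar_type(const at::Type& t) {
      return t.scalarType();
    }

    } // namespace detail

    // The macro body then dispatches on detail::scalar_type(TYPE), so both of
    // these compile, and only the second one warns:
    //   AT_DISPATCH_ALL_TYPES(self.scalar_type(), "tril_cpu", [&] { ... });
    //   AT_DISPATCH_ALL_TYPES(self.type(), "tril_cpu", [&] { ... });
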
zdevito pushed a commit to zdevito/ATen that referenced this pull request Mar 15, 2019
Summary:
Changes:
1) pytorch/pytorch#17527 changed dispatch macros to be ScalarType based instead of at::Type based. This broke cpp extensions that relied on dispatch macros. Since IMO these should be ScalarType based (and some extensions have already updated), we allow either at::Type or at::ScalarType to be passed, but passing at::Type will result in a deprecation warning.

2) Reintroduce macros that were deleted (AT_DISPATCH_ALL_TYPES_AND_HALF, AT_DISPATCH_COMPLEX_TYPES, AT_DISPATCH_ALL_TYPES_AND_HALF_AND_COMPLEX, AT_DISPATCH_ALL_TYPES_AND_COMPLEX); the AND_HALF ones now give a deprecation warning because more extensible macros were introduced in their place.

3) Makes AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND into a ScalarType based macro (and updates usages). This was the result of a logical merge conflict.

4) Adds a new macro, C10_DEPRECATED_MESSAGE, for passing a deprecation message to the compiler. I didn't spend much time seeing if this can be enabled for versions before C++14.
Pull Request resolved: pytorch/pytorch#17996

Reviewed By: ezyang

Differential Revision: D14446203

Pulled By: gchanan

fbshipit-source-id: 1da56e2e9c15aa8f913ebbf6bf1110c5b6dc375e
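
For reference, a plausible definition of the C10_DEPRECATED_MESSAGE macro from point 4 above, assuming only the standard C++14 [[deprecated]] attribute; the actual c10 definition may add compiler-specific fallbacks.

    // C++14 and later: the standard attribute accepts a message string directly.
    #define C10_DEPRECATED_MESSAGE(message) [[deprecated(message)]]

    // Example use on a hypothetical legacy overload:
    C10_DEPRECATED_MESSAGE("pass an at::ScalarType instead of an at::Type")
    void legacy_dispatch_entry();
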
@ezyang ezyang deleted the export-D14235395 branch May 30, 2019 15:46