
Conversation

ljk53 (Contributor) commented Jul 11, 2019

Summary:
We introduced RTTI in a recent change: #21613

The internal mobile build doesn't enable '-frtti' yet, so this diff replaces the
RTTI check with an alternative approach.

According to dzhulgakov, we can compare two tensors' type_id directly in most cases.
This is stricter than comparing TensorImpl subclass types, since the TensorImpl ->
type_id mapping is 1-to-n, but it is the more appropriate check for this use case.

The only two cases where we relax the direct type_id comparison (for legacy reasons) are:

  1. CPUTensor <-> CUDATensor;
  2. SparseCPUTensor <-> SparseCUDATensor;

Differential Revision: D16212472
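
A minimal, hypothetical C++ sketch of the check described above (the enum values and the helper name are illustrative stand-ins, not the actual c10/ATen API):

```cpp
// Sketch only: compare type ids directly instead of using RTTI
// (typeid/dynamic_cast) on the TensorImpl subclass, which matters for the
// mobile build where '-frtti' is not enabled.
#include <algorithm>

// Hypothetical stand-in for the real tensor type id enumeration.
enum class TensorTypeId {
  CPUTensorId,
  CUDATensorId,
  SparseCPUTensorId,
  SparseCUDATensorId,
  // ...other backends elided
};

// Type ids are compatible if they match exactly, or if they differ only by
// the legacy CPU <-> CUDA pair (dense or sparse).
bool type_ids_compatible(TensorTypeId a, TensorTypeId b) {
  if (a == b) {
    return true;
  }
  // Order the pair so each relaxation only needs to be listed once.
  const auto p = std::minmax(a, b);
  return (p.first == TensorTypeId::CPUTensorId &&
          p.second == TensorTypeId::CUDATensorId) ||
         (p.first == TensorTypeId::SparseCPUTensorId &&
          p.second == TensorTypeId::SparseCUDATensorId);
}
```

Because this is a plain enum comparison, the check works in builds compiled without RTTI support.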

@ljk53 ljk53 force-pushed the export-D16212472 branch from 1f7f0d3 to db2d81a on July 11, 2019 22:20
@pytorchbot pytorchbot added labels: module: autograd, module: internals, module: nn, module: operators, oncall: quantization on Jul 11, 2019
@ljk53 ljk53 requested review from dzhulgakov, gchanan and yf225 and removed request for dzhulgakov and yf225 on July 12, 2019 00:29
@ljk53 ljk53 force-pushed the export-D16212472 branch from db2d81a to 2dff987 on July 12, 2019 00:42
@pytorchbot pytorchbot added the module: mkldnn label on Jul 12, 2019
@ljk53 ljk53 force-pushed the export-D16212472 branch from 2dff987 to 54dabd2 on July 12, 2019 00:46
@ljk53 ljk53 force-pushed the export-D16212472 branch 2 times, most recently from 53f7415 to ac9baac, on July 12, 2019 20:45
@ljk53 ljk53 requested a review from gchanan July 12, 2019 20:52
gchanan (Contributor) left a comment:

approved assuming we figure out the correct thing to do for HIP.

@ljk53 ljk53 force-pushed the export-D16212472 branch 2 times, most recently from b0a7a9e to bf13962, on July 14, 2019 02:59
ljk53 (Contributor, Author) commented Jul 14, 2019:

@pytorchbot retest this please

@ljk53 ljk53 force-pushed the export-D16212472 branch from bf13962 to 8fca8c8 on July 15, 2019 17:33
Summary: as in the PR description above.

Differential Revision: D16212472

fbshipit-source-id: 5946ca605e86820329762f84761db9142fd06a29
@ljk53 ljk53 force-pushed the export-D16212472 branch from 8fca8c8 to 20e9ab7 on July 15, 2019 17:37
facebook-github-bot (Contributor) left a comment:

@ljk53 is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 16, 2019
Summary: as in the PR description above.
Pull Request resolved: pytorch/pytorch#22773

Differential Revision: D16277696

Pulled By: ljk53

fbshipit-source-id: 043e264fbacc37b7a11af2046983c70ddb62a599
facebook-github-bot (Contributor) commented:
@ljk53 merged this pull request in 3b1c399.

gchanan (Contributor) commented Jul 16, 2019:

@ljk53 are you not using gh tools?

@ljk53 ljk53 deleted the export-D16212472 branch November 13, 2019 06:56