
Conversation

@xuhdev
Collaborator

@xuhdev xuhdev commented Oct 2, 2019

f362a5a reverted
5ca612b due to build time concerns (also
see #25254). Now we come back to this by reusing the underlying code in
comparison operators: Logical operators on non-bool variables are
essentially comparison operators that semantically output bool
values. Compared with the previous implementation, we compromise by
always applying XOR on the same input type, while output can be either
the input type or the bool type.
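The reuse described above can be sketched in plain Python. This is only an illustration of the semantics, not the actual ATen kernel: logical XOR on non-bool inputs is essentially a "not equal" comparison of truth values, and the result defaults to bool unless the caller asks for another output type.

```python
def logical_xor(x, y, out_dtype=bool):
    # Logical XOR on arbitrary values reduces to a "not equal"
    # comparison of their truth values -- i.e. a comparison op
    # that semantically outputs bool.
    result = bool(x) != bool(y)
    # The output can be either bool (the default) or the input type.
    return out_dtype(result)

print(logical_xor(3, 0))                 # True
print(logical_xor(3, 0, out_dtype=int))  # 1
```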

@xuhdev xuhdev requested review from gchanan and ifedan October 2, 2019 22:39
@pytorchbot pytorchbot added module: cpu CPU specific problem (e.g., perf, algorithm) module: cuda Related to torch.cuda, and CUDA support in general module: docs Related to our documentation, both in docs/ and docblocks module: operators labels Oct 2, 2019
@ifedan
Contributor

ifedan commented Oct 3, 2019

There are some issues with tests

@xuhdev
Collaborator Author

xuhdev commented Oct 3, 2019

@pytorchbot retest this please

@jerryzh168 jerryzh168 added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Oct 5, 2019
@ifedan
Contributor

ifedan commented Oct 8, 2019

We do not support Byte output for logical operations anymore, so some tests are failing.

@xuhdev
Collaborator Author

xuhdev commented Oct 8, 2019

> We do not support Byte output for logical operations anymore, so some tests are failing.

Even if we pass a byte tensor as out? I thought the output is bool only when no output type is specified (e.g., a > b instead of a.gt(b, out=c)).
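A hypothetical sketch of the dispatch rule under discussion (the function name is illustrative, not the real ATen code): the result dtype is bool only when no out tensor is given; otherwise the out tensor's dtype wins.

```python
def result_dtype(input_dtype, out_dtype=None):
    # No out= tensor specified (e.g. `a > b`): the result is bool.
    # out= given (e.g. `a.gt(b, out=c)`): c's dtype, e.g. uint8, wins.
    return "bool" if out_dtype is None else out_dtype

print(result_dtype("uint8"))           # bool
print(result_dtype("uint8", "uint8"))  # uint8
```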

@xuhdev
Collaborator Author

xuhdev commented Oct 8, 2019

@ifedan I changed the test

@ifedan
Contributor

ifedan commented Oct 10, 2019

You have some failures in tests, because for an in-place operation a.op_(b) we expect a and b to have the same dtype; otherwise an overflow is possible (example: 'a' is uint8 and 'b' is float32; 'a' will be promoted to float32, the result will also be float32, and casting it back to uint8 can overflow). So we check that the dtypes of a and b are the same.
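The hazard above can be modeled in plain Python (a toy sketch, not the actual type-promotion machinery): promote the uint8 value to floating point, compute, then cast back with modulo-256 wraparound.

```python
def inplace_add_uint8(a, b):
    # 'a' models a uint8 storage slot; 'b' is a float operand.
    promoted = float(a) + b   # 'a' is promoted, math happens in float
    # Writing the float result back into uint8 storage wraps
    # modulo 256 -- the overflow the dtype check prevents.
    return int(promoted) % 256

print(inplace_add_uint8(200, 100.0))  # 44, not 300: the value overflowed
```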

@xuhdev
Collaborator Author

xuhdev commented Oct 11, 2019

@ifedan I've updated the test again. Thanks!

@ifedan
Contributor

ifedan commented Oct 11, 2019

logical_xor_cpu is not implemented for 'Half'

@xuhdev
Collaborator Author

xuhdev commented Oct 11, 2019

Updated again

@xuhdev xuhdev force-pushed the logical-xor branch 2 times, most recently from ca49c76 to 5ea8a22 Compare October 13, 2019 02:11
@xuhdev
Collaborator Author

xuhdev commented Oct 13, 2019

I skipped the test case where the dtype of either operand is bfloat16, since type promotion doesn't seem to be handled for comparison ops yet. That is probably more appropriate to fix in a separate PR.

In [2]: a=torch.tensor([0, 0], device='cpu', dtype=torch.int)

In [3]: b=torch.tensor([0, 0], device='cpu', dtype=torch.bfloat16)

In [4]: a == b
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-4-eeadcaeab874> in <module>
----> 1 a == b

RuntimeError: dtype != ScalarType::Undefined INTERNAL ASSERT FAILED at /home/hong/xusrc/pytorch/aten/src/ATen/native/TensorIterator.cpp:97, please report a bug to PyTorch.

@xuhdev xuhdev force-pushed the logical-xor branch 4 times, most recently from 893ab03 to b9ae14f Compare October 14, 2019 07:28
@xuhdev
Collaborator Author

xuhdev commented Oct 14, 2019

@pytorchbot rebase this please

@xuhdev
Collaborator Author

xuhdev commented Oct 15, 2019

@pytorchbot merge this please

@pytorchbot pytorchbot added the merge-this-please Was marked for merge with @pytorchbot merge this please label Oct 15, 2019
Contributor

@facebook-github-bot facebook-github-bot left a comment

@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Oct 15, 2019
Summary: (same as the PR description above)
Pull Request resolved: pytorch/pytorch#27248

Differential Revision: D17929356

Pulled By: ezyang

fbshipit-source-id: dbac08c7614b36f05d24c69104fee9df9ca523d5
 Tensor& comparison_op_out(Tensor& result, const Tensor& self, const Tensor& other, Stub& stub) {
   TORCH_CHECK(result.scalar_type() == kBool,
-              "The output tensor of lt must be a bool, but was ", result.scalar_type());
+              "The output tensor of a comparison or logical op must be a bool, but was ", result.scalar_type());
Contributor

I'm confused -- the documentation shows the output being uint8, but this error message says it must be a bool. What's up?

@xuhdev xuhdev deleted the logical-xor branch October 15, 2019 19:08
for other_dtype in torch.testing.get_all_dtypes():
    # Skip bfloat16 on CUDA
    if device != 'cpu' and torch.bfloat16 in (dtype, other_dtype):
        continue
Contributor

we generally don't test like this -- it's not good because we will eventually enable this op but we won't realize we need to enable testing. It's much better to assert that the test throws, so when we add support we are forced to test it.
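The reviewer's suggested pattern could be sketched like this (both the helper and the stub op are hypothetical): assert that the unsupported path raises, so that when support is added later the assertion fails and forces real test coverage.

```python
def logical_xor_bfloat16(a, b):
    # Hypothetical stand-in for the currently unsupported op.
    raise RuntimeError("\"logical_xor_cpu\" not implemented for 'BFloat16'")

def assert_raises(exc_type, fn, *args):
    # Expect fn(*args) to fail; once support lands and the call
    # succeeds, this assertion itself fails, reminding us to
    # replace the expected-error check with a real test.
    try:
        fn(*args)
    except exc_type:
        return
    raise AssertionError(f"expected {exc_type.__name__} to be raised")

assert_raises(RuntimeError, logical_xor_bfloat16, 1, 0)
```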

xuhdev added a commit to xuhdev/pytorch that referenced this pull request Oct 15, 2019
@facebook-github-bot
Contributor

@ezyang merged this pull request in e6a7140.

facebook-github-bot pushed a commit that referenced this pull request Oct 16, 2019
Summary:
Following up #27248, per suggestion by gchanan.
Pull Request resolved: #28031

Differential Revision: D17962226

Pulled By: gchanan

fbshipit-source-id: 788e4e1fc78b1cfc7915aedaa10c8656b19edc4d
thiagocrepaldi pushed a commit to thiagocrepaldi/pytorch that referenced this pull request Feb 4, 2020
Summary: (same as the PR description above)
Pull Request resolved: pytorch#27248

Differential Revision: D17929356

Pulled By: ezyang

fbshipit-source-id: dbac08c7614b36f05d24c69104fee9df9ca523d5
thiagocrepaldi pushed a commit to thiagocrepaldi/pytorch that referenced this pull request Feb 4, 2020
Summary:
Following up pytorch#27248, per suggestion by gchanan.
Pull Request resolved: pytorch#28031

Differential Revision: D17962226

Pulled By: gchanan

fbshipit-source-id: 788e4e1fc78b1cfc7915aedaa10c8656b19edc4d