Enable all and any for bool tensors #21033
Conversation
colesbury left a comment
This needs a test for non-empty any/all
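For context, a minimal sketch of the kind of check being requested, assuming the Python-level API (`torch.tensor` with `dtype=torch.bool`); the function name is illustrative and not the test actually added in this PR:

```python
import torch

def check_nonempty_bool_all_any():
    # Non-empty bool tensor with a mix of values.
    t = torch.tensor([True, True, False], dtype=torch.bool)
    assert not t.all()   # not every element is True
    assert t.any()       # at least one element is True

    # Non-empty bool tensor that is all True.
    t = torch.ones(2, 3, dtype=torch.bool)
    assert t.all()
    assert t.any()

check_nonempty_bool_all_any()
```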
Enable all and any for bool tensors gh-metadata: pytorch pytorch 21033 gh/izdeby/4/head
```cpp
Tensor result = at::empty({0}, self.options());
auto iter = make_reduction(
    "all", result, self, {}, false, at::ScalarType::Byte);
```
Good.
This is fine for now, but the explicit return type was intentional. numpy.all and numpy.any always return a bool array even for non-bool inputs.
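To illustrate the numpy behavior being referenced (a quick check for context, not part of the PR): numpy reduces to a bool result regardless of the input dtype.

```python
import numpy as np

a = np.array([1, 2, 0], dtype=np.int64)   # non-bool input
print(np.all(a), np.all(a).dtype)         # False bool
print(np.any(a), np.any(a).dtype)         # True bool
```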
ezyang left a comment
Approved with comments. I agree with Sam: the non-empty case needs testing (or better yet, find where we already test all/any and generalize that test code).
@pytorchbot retest this please
Summary:
Pull Request resolved: pytorch/pytorch#21033
ghimport-source-id: 35fdcf27b0bde8ec3e5b3051cf0d730f20f94783
Differential Revision: D15530497
Pulled By: izdeby
fbshipit-source-id: 9c15cc960055f59a05ce0276f9d51c567626d966
Stack from ghstack:
This PR is part of a stack that will change the result tensor type of comparison ops from uint8 to bool. As this change is rather big and a lot of prep work is needed, I'm breaking it into a stack.
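As a rough illustration of the behavior change this stack is working toward (the exact dtype printed depends on the PyTorch version; this snippet is not part of the PR):

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([1, 0, 3])
mask = a == b
# Before this stack lands, comparison ops return torch.uint8;
# afterwards they return torch.bool.
print(mask.dtype)
```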
Changes in this PR:
Differential Revision: D15530497