
Conversation

@izdeby (Contributor) commented May 28, 2019

Stack from ghstack:

This PR is part of a stack that will change the result tensor type of comparison ops from uint8 to bool. Since this change is rather large and a lot of prep work is needed, I'm breaking it into a stack.

Changes in this PR:

  • Enable all() and any() for bool tensors.
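For context, the reduction semantics being enabled here can be sketched as follows. This is an illustrative example using NumPy as a stand-in (not code from this diff); `torch.all`/`torch.any` on bool tensors are meant to follow the same logic:

```python
import numpy as np

# Illustrative sketch of the all()/any() reduction semantics this PR
# enables for bool tensors; NumPy is used here as a stand-in for torch.
mask = np.array([True, True, False])

print(np.all(mask))  # False: not every element is True
print(np.any(mask))  # True: at least one element is True

# Empty input: all() is vacuously True, any() is False.
empty = np.array([], dtype=bool)
print(np.all(empty), np.any(empty))
```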

Differential Revision: D15530497

@colesbury (Member) left a comment


This needs a test for non-empty any/all

Enable all and any for bool tensors

gh-metadata: pytorch pytorch 21033 gh/izdeby/4/head
@izdeby changed the title from "Enable all and any for bool tensors" to "[WIP] Enable all and any for bool tensors" May 28, 2019
izdeby added 2 commits May 28, 2019 15:38
Enable all and any for bool tensors

gh-metadata: pytorch pytorch 21033 gh/izdeby/4/head
@izdeby changed the title from "[WIP] Enable all and any for bool tensors" to "Enable all and any for bool tensors" May 29, 2019
@izdeby izdeby requested a review from colesbury May 29, 2019 00:05
izdeby added 2 commits May 28, 2019 17:43
Enable all and any for bool tensors

gh-metadata: pytorch pytorch 21033 gh/izdeby/4/head

Tensor result = at::empty({0}, self.options());
auto iter = make_reduction(
    "all", result, self, {}, false, at::ScalarType::Byte);
Contributor
Good.

Member
This is fine for now, but the explicit return type was intentional. numpy.all and numpy.any always return a bool array even for non-bool inputs.
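The NumPy behavior referenced here can be checked directly; a minimal sketch, assuming only NumPy:

```python
import numpy as np

# numpy.all / numpy.any always reduce to a bool result,
# even when the input has a non-bool dtype such as uint8.
x = np.array([1, 2, 0], dtype=np.uint8)

print(np.all(x).dtype)  # bool (and the value is False: x contains a zero)
print(np.any(x).dtype)  # bool (and the value is True)
```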

@ezyang (Contributor) left a comment

Approved with comments; I agree with Sam: the non-empty case needs testing (or, better yet, find where all/any are already tested and generalize that test code).
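One way such a generalized test could look, sketched with NumPy as a stand-in for the tensor library (the helper name and dtype list are hypothetical, not from this PR):

```python
import numpy as np

# Hypothetical parametrized check for all()/any() on non-empty inputs,
# generalized over dtypes as the review suggests.
def check_all_any(dtype):
    x = np.array([1, 1, 1], dtype=dtype)
    assert np.all(x) and np.any(x)  # all ones: both reductions hold
    x[1] = 0
    assert not np.all(x)            # a single zero breaks all()
    assert np.any(x)                # but any() still holds

for dtype in (np.bool_, np.uint8, np.int64):
    check_all_any(dtype)
```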

izdeby added 5 commits May 29, 2019 08:55
Enable all and any for bool tensors

gh-metadata: pytorch pytorch 21033 gh/izdeby/4/head
@izdeby (Contributor, Author) commented May 30, 2019

@pytorchbot retest this please

izdeby added 9 commits May 29, 2019 19:43
Enable all and any for bool tensors

gh-metadata: pytorch pytorch 21033 gh/izdeby/4/head
@zou3519 zou3519 deleted the gh/izdeby/4/head branch May 30, 2019 23:18
zdevito pushed a commit to zdevito/ATen that referenced this pull request May 31, 2019
Summary:
Pull Request resolved: pytorch/pytorch#21033
ghimport-source-id: 35fdcf27b0bde8ec3e5b3051cf0d730f20f94783

Differential Revision: D15530497

Pulled By: izdeby

fbshipit-source-id: 9c15cc960055f59a05ce0276f9d51c567626d966
@facebook-github-bot (Contributor)

@izdeby merged this pull request in 64f06d4.

8 participants