@izdeby izdeby commented May 28, 2019

Stack from ghstack:

This PR is part of a stack that will change the result tensor type of comparison ops from uint8 to bool. Since this change is rather large and needs a lot of prep work, I'm breaking it into a stack.

Changes in this PR:

  • Enable min, max, minall, maxall, cmin, cmax, cmaxValue, cminValue for bool tensors by moving their declarations and definitions above the #if !defined(TH_REAL_IS_BOOL) macro.

Differential Revision: D15530499

@pytorchbot pytorchbot added labels May 28, 2019: module: cpu (CPU specific problem, e.g., perf, algorithm), module: cuda (related to torch.cuda and CUDA support in general), module: internals (related to internal abstractions in c10 and ATen), module: operators
@izdeby izdeby requested review from VitalyFedyunin, colesbury, ezyang and gchanan and removed request for ezyang May 28, 2019 19:06
ezyang commented May 28, 2019

Adding on to the documentation question from the previous PR: in general, if I want to find out what the semantics of boolean tensors are, where in the documentation can I find this? If I look at any given function (like max) and want to know how it behaves on a boolean tensor, how do I know whether it does or not?
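(As background for the semantics question above: because False < True, the comparison-based ops have a natural reading on bool, where elementwise max behaves like logical OR and min like logical AND. A minimal plain-Python sketch, not PyTorch code; `bool_max` and `bool_min` are illustrative names:)

```python
# Illustrative only: plain-Python stand-ins, not PyTorch operators.
# Since False < True, max on bool acts as logical OR and min as AND.
def bool_max(a, b):
    return [x or y for x, y in zip(a, b)]

def bool_min(a, b):
    return [x and y for x, y in zip(a, b)]

a = [True, True, False, False]
b = [True, False, True, False]
print(bool_max(a, b))  # [True, True, True, False]
print(bool_min(a, b))  # [True, False, False, False]
```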

izdeby added 2 commits May 28, 2019 20:01
…minValue for bool tensors"

Enabled min, max, minall, maxall, cmin, cmax, cmaxValue, cminValue for bool tensors

gh-metadata: pytorch pytorch 21031 gh/izdeby/2/head
@zou3519 zou3519 deleted the gh/izdeby/2/head branch May 29, 2019 20:25
zdevito pushed a commit to zdevito/ATen that referenced this pull request May 29, 2019
…r bool tensors (#21031)

Summary:
Pull Request resolved: pytorch/pytorch#21031
ghimport-source-id: 379b3e9d20872eb5ad14403ed6751cdb0e730bc5

Reviewed By: ezyang

Differential Revision: D15530499

Pulled By: izdeby

fbshipit-source-id: f113d6974ee18ac3dfb5c0bcff66865345d137d2
@facebook-github-bot

@izdeby merged this pull request in 7cb1aa6.
