
Conversation

@vishwakftw (Contributor) commented on Jul 11, 2019:

…rides

Changelog:

- Fix the behavior of `torch.triu` / `torch.tril` on certain unsqueezed tensors, which could produce uninitialized values on CPU

Test plan:

- Add tests for these cases in `test_triu_tril` in `test_torch`

Fixes #22581 (Batched Triu And Tril Incorrect for Some Inputs)
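For illustration, here is a minimal sketch (in the ATen C++ API) of the kind of input the fix targets. The exact shapes and strides that triggered the bug are an assumption for this sketch, not taken from the issue:

#include <ATen/ATen.h>
#include <iostream>

int main() {
  // Hypothetical trigger: unsqueeze + expand yields a zero-stride batch
  // dimension, so the input is not batch-contiguous.
  at::Tensor base = at::randn({4, 4});
  at::Tensor batched = base.unsqueeze(0).expand({3, 4, 4});

  // Before the fix, triu/tril on such CPU inputs could produce
  // uninitialized values; the expected output is simply the upper
  // triangle of `base` replicated across the batch.
  at::Tensor result = at::triu(batched);
  std::cout << result << std::endl;
  return 0;
}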

The review thread below is attached to this diff hunk:

 * Contiguous tensors and tensors with less than 3 dimensions pass this check
 */
-static inline bool checkTrilTriuBatchContiguous(const Tensor& tensor) {
+static inline std::tuple<bool, Tensor> checkTrilTriuBatchContiguous(const Tensor& tensor) {
Contributor (reviewer):

nit: Returning two values like this is confusing to read (at least for me), but I'm not sure how to improve it.

Maybe a signature like

Tensor coerceTrilTriuBatchContiguous(const Tensor& tensor, bool* was_already_batch_contiguous)

might be clearer.

@vishwakftw (Contributor, author):

I thought about keeping the code more descriptive, and hence didn't consider a LAPACK-like return strategy.

But if you think it reads better, I can make a change like the one below:

Tensor coerceTrilTriuBatchContiguous(const Tensor& tensor, bool& was_already_batched) {
  ...
}
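For readers following the thread, here is a minimal, self-contained sketch of how the two styles compare at the call site. The `Tensor` stub and the contiguity logic are stand-ins for this illustration, not the actual ATen implementation:

#include <tuple>

// Stand-in for at::Tensor; just enough structure for the sketch.
struct Tensor {
  bool batch_contiguous;
  // Pretend copy that is always batch-contiguous.
  Tensor contiguous() const { return Tensor{true}; }
};

// Style used in the PR: return the flag and the (possibly copied)
// tensor together in a tuple.
static inline std::tuple<bool, Tensor> checkTrilTriuBatchContiguous(const Tensor& t) {
  return t.batch_contiguous ? std::make_tuple(true, t)
                            : std::make_tuple(false, t.contiguous());
}

// Style floated in review: out-parameter for the flag, tensor as the
// return value.
static inline Tensor coerceTrilTriuBatchContiguous(const Tensor& t, bool& was_already_batched) {
  was_already_batched = t.batch_contiguous;
  return was_already_batched ? t : t.contiguous();
}

void caller(const Tensor& t) {
  // Tuple style: both results are unpacked in one statement.
  bool ok = false;
  Tensor working{false};
  std::tie(ok, working) = checkTrilTriuBatchContiguous(t);

  // Out-parameter style: the flag must be declared before the call.
  bool was_batched = false;
  Tensor working2 = coerceTrilTriuBatchContiguous(t, was_batched);
  (void)ok; (void)working; (void)working2;  // silence unused-variable warnings
}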

Contributor (reviewer):

Let's just keep what you have for now.

@vishwakftw (Contributor, author):

Sure, thanks for the review.

@facebook-github-bot (Contributor) left a comment:

@zou3519 is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) left a comment:

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@vishwakftw (Contributor, author):

@pytorchbot rebase this please

@facebook-github-bot (Contributor) left a comment:

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor):

@zou3519 merged this pull request in f8ad65a.

zdevito pushed a commit to zdevito/ATen that referenced this pull request on Jul 16, 2019:
… (#22730)

Summary:
…rides

Changelog:
- Fix behavior of `torch.triu` / `torch.tril` on certain unsqueezed tensors that lead to uninitialized values on CPU
Pull Request resolved: pytorch/pytorch#22730

Test Plan:
- Add tests for these cases in test_triu_tril in test_torch

Fixes pytorch/pytorch#22581

Differential Revision: D16222897

Pulled By: zou3519

fbshipit-source-id: b86b060187797e5cd2a7731421dff1ba2b5c9596
@vishwakftw deleted the tril-triu-cpu-fix branch on August 2, 2019 at 03:55.