Fix torch.triu / torch.tril on contiguous tensors with non-default strides #22730
Conversation
Changelog:
- Fix behavior of `torch.triu` / `torch.tril` on certain unsqueezed tensors that lead to uninitialized values on CPU

Test plan:
- Add tests for these cases in test_triu_tril in test_torch

Fixes #22581
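As an illustration (not taken from the PR itself), here is a hypothetical ATen-level sketch of the class of input the fix targets: a size-1 batch dimension with a nonstandard stride keeps a tensor contiguous (size-1 dims are ignored by the contiguity check) while its strides differ from the default layout. The strides below are chosen for illustration only and may not match the exact repro in #22581:

```cpp
#include <ATen/ATen.h>

int main() {
  // Contiguous tensor with non-default strides: the size-1 batch dim
  // carries stride 0, which the contiguity check ignores.
  at::Tensor base = at::randn({3, 3});
  at::Tensor batched = base.as_strided({1, 3, 3}, {0, 3, 1});
  TORCH_CHECK(batched.is_contiguous());

  // After the fix, the batched result must agree with triu applied to
  // each matrix slice; before it, the CPU path could leave the output
  // uninitialized for layouts like this.
  at::Tensor out = at::triu(batched);
  TORCH_CHECK(at::equal(out[0], at::triu(batched[0])));
  return 0;
}
```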
 * Contiguous tensors and tensors with less than 3 dimensions pass this check
 */
-static inline bool checkTrilTriuBatchContiguous(const Tensor& tensor) {
+static inline std::tuple<bool, Tensor> checkTrilTriuBatchContiguous(const Tensor& tensor) {
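For orientation, a hypothetical sketch of the logic the changed helper might implement, inferred from the PR title and discussion rather than copied from the diff; the non-contiguous fallback here is simplified:

```cpp
#include <ATen/ATen.h>
#include <tuple>
#include <vector>

static inline std::tuple<bool, at::Tensor> checkTrilTriuBatchContiguous(
    const at::Tensor& tensor) {
  // Tensors with fewer than 3 dimensions take the non-batched path.
  if (tensor.dim() < 3) {
    return std::make_tuple(true, tensor);
  }
  if (tensor.is_contiguous()) {
    // A contiguous tensor may still carry non-default strides (size-1
    // dims can have any stride); re-stride it so stride-based batch
    // kernels read it correctly.
    std::vector<int64_t> default_strides(tensor.dim());
    int64_t stride = 1;
    for (int64_t i = tensor.dim() - 1; i >= 0; --i) {
      default_strides[i] = stride;
      stride *= tensor.size(i);
    }
    if (tensor.strides().equals(default_strides)) {
      return std::make_tuple(true, tensor);
    }
    return std::make_tuple(
        true, tensor.as_strided(tensor.sizes(), default_strides));
  }
  // Simplified fallback: coerce anything else to contiguous (a real
  // helper could also recognize batch-contiguous layouts without
  // copying).
  return std::make_tuple(false, tensor.contiguous());
}
```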
nit: Returning two arguments like this is confusing to read (at least for me), but I'm not sure how to improve it.
Maybe a signature like
Tensor coerceTrilTriuBatchContiguous(const Tensor& tensor, bool* was_already_batch_contiguous)
might be clearer.
I thought about keeping the code more descriptive, and hence didn't consider a LAPACK-like return strategy.
But if you think that it reads better, I can make a change like below:
Tensor coerceTrilTriuBatchContiguous(const Tensor& tensor, bool& was_already_batched) {
...
}
Let's just keep what you have for now
Sure, thanks for the review.
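As a side note on the API question above, a toy comparison of the two call-site styles, using a stand-in type rather than the real `at::Tensor` and eliding the bodies; assuming a pre-C++17 toolchain, the tuple is unpacked with `std::tie`:

```cpp
#include <tuple>

struct Tensor {};  // stand-in for at::Tensor

std::tuple<bool, Tensor> checkTrilTriuBatchContiguous(const Tensor& t) {
  return std::make_tuple(true, t);  // body elided; see sketch above
}

Tensor coerceTrilTriuBatchContiguous(const Tensor& t, bool& was_already_batched) {
  was_already_batched = true;  // body elided
  return t;
}

int main() {
  Tensor self;

  // Tuple return: both results are unpacked in one statement.
  bool batched;
  Tensor working;
  std::tie(batched, working) = checkTrilTriuBatchContiguous(self);

  // Out-parameter: the flag is declared up front and filled in by the
  // call, which reads as a LAPACK-style interface.
  bool was_already_batched = false;
  Tensor working2 = coerceTrilTriuBatchContiguous(self, was_already_batched);
  return 0;
}
```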
facebook-github-bot left a comment
@zou3519 is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
facebook-github-bot left a comment
@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@pytorchbot rebase this please
facebook-github-bot left a comment
@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Fix torch.triu / torch.tril on contiguous tensors with non-default strides (#22730)

Summary:
Changelog:
- Fix behavior of `torch.triu` / `torch.tril` on certain unsqueezed tensors that lead to uninitialized values on CPU

Pull Request resolved: pytorch/pytorch#22730

Test Plan:
- Add tests for these cases in test_triu_tril in test_torch

Fixes pytorch/pytorch#22581

Differential Revision: D16222897
Pulled By: zou3519
fbshipit-source-id: b86b060187797e5cd2a7731421dff1ba2b5c9596