[TEST] Modernize test_sort_large #155546
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/155546
Note: Links to docs will display an error until the docs builds have been completed.
As of commit fea7174 with merge base 3863bbb: ⏳ 1 Pending, 2 Unrelated Failures
- FLAKY: the following job failed but was likely due to flakiness present on trunk.
- UNSTABLE: the following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
self.assertEqual(vm, torch.arange(8192, dtype=dtype, device=device))
self.assertEqual(im, t0.sort().indices, exact_dtype=False)
Do we run this test in CI?
No, CI machines are not that big, but I have tested it on GB300, which has 288GB of memory.
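For context, here is a minimal sketch of how a memory-hungry test like this is typically gated in PyTorch's device test framework so that it gets deselected on smaller machines. The largeTensorTest and dtypes decorators are the real test-infra helpers, but the 200GB figure is taken from the PR description below and the test body is illustrative rather than the PR's exact code:

```python
# Hypothetical sketch: gate a test on reported device memory so it is
# deselected on machines that cannot hold the large working set.
import torch
from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.common_device_type import (
    dtypes,
    instantiate_device_type_tests,
    largeTensorTest,
)


class TestSortLargeSketch(TestCase):
    # Skipped automatically on devices that do not have this much memory.
    @largeTensorTest("200GB")
    @dtypes(torch.float16)
    def test_sort_large_sketch(self, device, dtype):
        t0 = torch.randperm(8192, device=device).to(dtype)
        # Expand to many identical rows so the sort runs over a huge buffer.
        t = t0.view(1, 8192).expand(2**18 + 1, -1).contiguous()
        v, i = t.sort()
        self.assertEqual(v[0], t0.sort().values)


instantiate_device_type_tests(TestSortLargeSketch, globals(), only_for="cuda")

if __name__ == "__main__":
    run_tests()
```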
@pytorchbot merge
Merge started: Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Currently, std::min -> ::min did not work as expected on ROCm when input values >= 2147483648. Replace std::min with a ternary statement; alternatively, std::min can be kept with explicit typing, std::min<int64_t>.

Fixes on ROCm: test_sort_and_select.py::TestSortAndSelectCUDA::test_sort_large_cuda_float16
Error: RuntimeError: Cannot sort dimension of length 8192

Combines upstream PRs:
- pytorch#161054 to fix std::min on ROCm
- pytorch#155546 to fix the Python test
- pytorch#159939 to change the test dtype from int8 to float16

Fixes: SWDEV-526432
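As an illustration of the two workarounds described above, here is a standalone C++ sketch; it is not the PR's kernel code, and the values and variable names are made up:

```cpp
// Standalone sketch of the mixed-width std::min pitfall described above.
#include <algorithm>
#include <cstdint>
#include <iostream>

int main() {
    const int64_t remaining = 3000000000LL;  // >= 2147483648, does not fit in int32_t
    const int32_t block = 8192;

    // std::min(remaining, block) does not compile host-side because template
    // deduction fails on mixed operand types; per the commit message above,
    // the ::min that gets used instead on ROCm misbehaved for values >= 2^31.

    // Fix 1: an explicit template argument keeps the comparison in int64_t.
    const int64_t n1 = std::min<int64_t>(remaining, block);

    // Fix 2: a plain ternary avoids overload resolution entirely.
    const int64_t n2 = remaining < block ? remaining
                                         : static_cast<int64_t>(block);

    std::cout << n1 << " " << n2 << "\n";  // both print 8192
    return 0;
}
```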
Since its introduction ~4 years ago, the test test_sort_large has always been deselected because it requires 200GB of CUDA memory. Now that we do have GPUs this big, it gets selected, but fails with var_mean not being a member of torch.Tensor and with var_mean accepting only floating point tensors.
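For illustration, a minimal sketch of that failure mode and the kind of fix it implies, with shapes scaled down so it runs anywhere; the names (t0, v, i) only mirror the original test, this is not the PR's exact code:

```python
# Sketch of the var_mean issue described above; shapes are scaled down.
import torch

t0 = torch.randperm(8192).to(torch.float32)           # small CPU stand-in
t = t0.view(1, 8192).expand(1025, -1).contiguous()    # many identical rows
v, i = t.sort()

# Old style: var_mean is not a Tensor method, and torch.var_mean rejects
# integer inputs such as the int64 indices tensor `i`.
#   vv, vm = v.var_mean(dim=0)   # AttributeError: no Tensor.var_mean
#   torch.var_mean(i, dim=0)     # RuntimeError: floating point/complex only

# Modernized style: call the functional form and cast integer indices first.
vv, vm = torch.var_mean(v, dim=0)
iv, im = torch.var_mean(i.to(torch.float32), dim=0)

# Every row of `t` is a copy of t0, so the variance is zero and the mean of
# the sorted values is just arange(8192).
torch.testing.assert_close(vv, torch.zeros_like(vv))
torch.testing.assert_close(vm, torch.arange(8192, dtype=torch.float32))
```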