Move non-inductor workflows from CUDA 12.6 to CUDA 12.8 #155234
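The change itself is a CI configuration bump, not a library change. As a minimal sketch (hypothetical snippet, assuming a locally installed PyTorch build produced by one of these workflows), a reviewer could confirm which CUDA toolkit a build was compiled against and that the GPU runtime is usable:

```python
import torch

# torch.version.cuda reports the CUDA toolkit the binary was built with,
# e.g. "12.6" before this PR and "12.8" after the workflow bump.
print("built with CUDA:", torch.version.cuda)

if torch.cuda.is_available():
    # Basic runtime check on the CI GPU: device name and compute capability.
    print("device:", torch.cuda.get_device_name(0))
    print("capability:", torch.cuda.get_device_capability(0))
```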
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/155234
Note: Links to docs will display an error until the docs builds have been completed.
❌ 4 New Failures, 1 Cancelled Job, 7 Unrelated Failures as of commit bd33e48 with merge base 8892b78.
NEW FAILURES - The following jobs have failed:
CANCELLED JOB - The following job was cancelled. Please retry:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-push: 3e6a325 to 875345a (Compare)
Observing these failures on trunk:
- inductor/test_inductor_freezing.py::FreezingGpuTests::test_cpp_wrapper_cuda (GH job link, HUD commit link)
- test_expanded_weights.py::TestExpandedWeightModuleCUDA::test_module_nn_GRU_eval_mode_cuda_float32 (GH job link, HUD commit link)
- test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_conv1d_cuda_float32 (GH job link, HUD commit link)

It looks like the errors on this issue are pre-existing on trunk as of this Friday.
Force-push: 875345a to 24d5cb3 (Compare)
@pytorchmergebot merge -f "lint is green all other tests have been already run" |
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorchbot revert -m "causing a bunch of tests to fail? ex test_nn.py::TestNNDeviceTypeCUDA::test_variable_sequence_cuda_float32 GH job link HUD commit link, some of the failures attributed to broken trunk on friday seem real?" -c ignoredsignal
@pytorchbot successfully started a revert job. Check the current status here.
This reverts commit ede6ead. Reverted #155234 on behalf of https://github.com/clee2000 due to causing a bunch of tests to fail? ex test_nn.py::TestNNDeviceTypeCUDA::test_variable_sequence_cuda_float32 [GH job link](https://github.com/pytorch/pytorch/actions/runs/15545607752/job/43773157441) [HUD commit link](https://hud.pytorch.org/pytorch/pytorch/commit/ede6ead8cd8e925cb093f2b3016342e645bd728d), some of the failures attributed to broken trunk on friday seem real? ([comment](#155234 (comment)))
@atalman your PR has been successfully reverted.
Force-push: dd43110 to 39272f2 (Compare)
These are existing issues: test_foreach.py::TestForeachCUDA::test_pointwise_op_with_tensor_of_scalarlist_overload__foreach_addcdiv_is_fastpath_True_cuda_complex128 (GH job link, HUD commit link)
@pytorchmergebot merge -f "all previously failed workflows are passing now" |
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
[ez] Disable some failing periodic tests (#156731)

- test_torch.py::TestTorchDeviceTypeCUDA::test_storage_use_count_cuda: added in #150059, fails in debug mode ([GH job link](https://github.com/pytorch/pytorch/actions/runs/15856606665/job/44706020831), [HUD commit link](https://hud.pytorch.org/pytorch/pytorch/commit/4491326fb0c0e67eca1598ae33c41cdfced2cd33))
- inductor/test_inductor_freezing.py::FreezingGpuTests::test_cpp_wrapper_cuda ([GH job link](https://github.com/pytorch/pytorch/actions/runs/15856606665/job/44707119967), [HUD commit link](https://hud.pytorch.org/pytorch/pytorch/commit/4491326fb0c0e67eca1598ae33c41cdfced2cd33)): started failing after moving to the new CUDA version (#155234)

I'll ping people if this gets merged.

Pull Request resolved: #156731
Approved by: https://github.com/huydhn
(cherry picked from commit 2ff3280)
Co-authored-by: Catherine Lee <[email protected]>
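For context, one way to keep a regressed case from blocking CI while it is investigated is a conditional skip gated on the toolkit version. The sketch below is purely illustrative (hypothetical test and variable names; it is not necessarily how #156731 disables these tests, since PyTorch also supports disabling tests via "DISABLED <test name>" tracking issues):

```python
import unittest
import torch

# Hypothetical guard: true only when the build was compiled against CUDA 12.8.
IS_CUDA_12_8 = torch.version.cuda is not None and torch.version.cuda.startswith("12.8")

class StorageUseCountExample(unittest.TestCase):
    # Illustrative stand-in for a test that regressed after the toolkit bump.
    @unittest.skipIf(IS_CUDA_12_8, "flaky after the CUDA 12.8 bump, see #155234")
    def test_storage_nbytes(self):
        t = torch.ones(2, 2)
        # 4 float32 elements -> 16 bytes of backing storage.
        self.assertEqual(t.untyped_storage().nbytes(), 16)

if __name__ == "__main__":
    unittest.main()
```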