[BE]: Update cudnn to 9.10.2.21 #155576
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/155576
Note: Links to docs will display an error until the docs builds have been completed.
❌ 8 New Failures, 3 Unrelated Failures as of commit 8315757 with merge base 9328a7f.
NEW FAILURES - The following jobs have failed:
FLAKY - The following jobs failed, but likely due to flakiness present on trunk:
BROKEN TRUNK - The following jobs failed but were already failing on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
cc @atalman for help uploading the cudnn packages.
Force-pushed 6634853 to 2e411ca.
.ci/docker/common/install_cuda.sh (Outdated)
I wouldn't bump the cuDNN version for cu126, as we don't have enough test combinations done with cu126 and 9.10.2.21. I recommend keeping the cuDNN version unchanged for CUDA 12.6, but I'm open to what others prefer.
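(Aside: the suggestion amounts to pinning cuDNN per CUDA toolkit rather than applying one global bump. A hypothetical Python sketch of that shape; the real logic in .ci/docker/common/install_cuda.sh is a shell script with its own variable names, and the cu126 value below is a placeholder, not the actual prior pin.)

```python
# Hypothetical per-CUDA cuDNN pins: cu126 keeps its previously tested
# cuDNN while cu128 takes the 9.10.2.21 bump from this PR.
CUDNN_PIN = {
    "12.6": "<previous, unchanged pin>",  # placeholder value
    "12.8": "9.10.2.21",
}

def cudnn_for(cuda: str) -> str:
    """Look up the cuDNN version pinned for a given CUDA toolkit."""
    return CUDNN_PIN[cuda]

assert cudnn_for("12.8") == "9.10.2.21"
```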
The other aspect to this (CUDA 12.6 + 9.10.2.21) is that, as time goes by, CUDA 12.6 is getting tested less in CI due to the ongoing effort to move CI from CUDA 12.6 to 12.8.
There are important performance updates for SDPA on A100s/H100s here though, and it's better to support fewer cuDNN versions.
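(Aside: a minimal sketch of pinning SDPA dispatch to the cuDNN backend, so the kernels this bump touches are the ones actually exercised. Assumes a CUDA build of PyTorch recent enough to ship torch.nn.attention; shapes and dtypes are arbitrary.)

```python
# Sketch: restrict SDPA to cuDNN attention instead of letting the
# dispatcher silently pick flash/mem-efficient/math.
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)

# Inside this context only the cuDNN backend may serve the call; it
# raises instead of silently falling back if cuDNN can't handle it.
with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v)

print(out.shape, "linked cuDNN:", torch.backends.cudnn.version())
```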
.ci/docker/common/install_cudnn.sh (Outdated)
TODO: fix.
Force-pushed 2e411ca to 8315757.
@pytorchbot merge -i
Merge started. Your change will be merged while ignoring the following 7 checks: pull / linux-jammy-py3-clang12-executorch / test (executorch, 1, 1, linux.2xlarge, unstable), macos-arm64-binary-wheel / wheel-py3_13t-cpu-build, macos-arm64-binary-wheel / wheel-py3_10-cpu-build, macos-arm64-binary-wheel / wheel-py3_12-cpu-build, macos-arm64-binary-wheel / wheel-py3_11-cpu-build, macos-arm64-binary-wheel / wheel-py3_9-cpu-build, macos-arm64-binary-wheel / wheel-py3_13-cpu-build. Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job was waiting for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Hi @Skylion007, it seems 12.8 was missed in this PR in .ci/docker/common/install_cuda.sh. Please follow up with a fix to update it, thanks.
@pytorchbot revert -m "breaks the same test again (I remember there was a version that adjusted tolerances), see https://hud.pytorch.org/hud/pytorch/pytorch/bc3972b80a7abe85036f48b610532fce39ea5097/1?per_page=50&name_filter=gcc11-sm89&mergeEphemeralLF=true" -c nosignal
@pytorchbot successfully started a revert job. Check the current status here.
This reverts commit 2d3615f. Reverted #155576 on behalf of https://github.com/malfet due to breaks the same test again (I remember there was a version that adjusted tolerances), see https://hud.pytorch.org/hud/pytorch/pytorch/bc3972b80a7abe85036f48b610532fce39ea5097/1?per_page=50&name_filter=gcc11-sm89&mergeEphemeralLF=true ([comment](#155576 (comment)))
@Skylion007 your PR has been successfully reverted.
I tuned the test here, so landing this PR should fix it: #155234
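(Aside: the fix in #155234 adjusts tolerances; a sketch of the general shape, with illustrative values that need not match the actual diff.)

```python
# Illustrative only: loosen atol/rtol so small numerical drift between
# cuDNN versions doesn't flip the test. Values are placeholders, not
# the ones used in #155234.
import torch

def check_close(actual: torch.Tensor, expected: torch.Tensor) -> None:
    # Tighter tolerances tripped on the new kernels; a modest bump
    # absorbs the drift without masking real regressions.
    torch.testing.assert_close(actual, expected, atol=2e-3, rtol=2e-3)

ref = torch.randn(32, 32)
check_close(ref + 2e-4 * torch.randn_like(ref), ref)
```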
@pytorchmergebot merge -f "fix for the failure is deployed #155234"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Apologies here, I should have double-checked this before @atalman remerged the PR. Interestingly, this should have raised a warning in the cuDNN frontend logger, but I'm surprised nobody reported it. I guess because the nightlies are often statically linked?
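(Aside: a sketch of surfacing those frontend logs and checking which cuDNN a given wheel actually linked. The CUDNN_FRONTEND_LOG_* names follow the cudnn-frontend README; treat them as an assumption and verify against the frontend version your build vendors.)

```python
# Sketch: enable cuDNN frontend logging before torch initializes cuDNN,
# then report the cuDNN version the interpreter actually linked. This
# works for statically linked wheels too, since version() reflects
# whatever library torch carries.
import os

os.environ["CUDNN_FRONTEND_LOG_INFO"] = "1"       # turn the logger on
os.environ["CUDNN_FRONTEND_LOG_FILE"] = "stderr"  # or a file path

import torch  # import after setting the env vars so the logger sees them

print("cuDNN version torch reports:", torch.backends.cudnn.version())
```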
Update to CUDNN 9.10.2.21