Add support of CTC float64 #21822
Conversation
Does this help for long sequence lengths?
For N=20000 (with code from #4193) the plot looks like: [benchmark plot image]
Force-pushed 566f749 to 44daa72
We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
Force-pushed 44daa72 to d9d31ea
CLAs look good, thanks!
Sorry for the delay. You may need to rebase on head?
Force-pushed d9d31ea to ac9362d
The commit can be rebased without any merge.
Hi,
Nagging Reviewer @ebrevdo: You have been added as a reviewer to this pull request. Please add your review or reassign. It has been 164 days with no activity and the
I'm going to go ahead and close this PR, because it seems to have stalled. If you're still interested in pursuing this (and responding to my comments), please feel free to reopen!
I am interested in merging this request and responding to the comments, but not in rebasing the changes, as I lack familiarity with the current codebase.
@aprimostka thank you! Please open a new PR with these changes so that you can avoid rebasing.
Imported from GitHub PR openxla/xla#21822

Created `ShouldUsePtxExtension` helper for the extension suffix (this will also be used for sm120, etc). CUDA 12.8 was recently released, which supports PTX 8.7, but that is not supported by the integrated LLVM (support added in llvm/llvm-project#124155), so leaving the association with PTX 8.6 - this doesn't raise warnings during compilation.

Copybara import of the project:

-- 267cf74a084c933e532a622da2485befdc47f8ce by Sergey Kozub <[email protected]>:

Add support for SM100a architecture (Blackwell)

Merging this change closes #21822

FUTURE_COPYBARA_INTEGRATE_REVIEW=openxla/xla#21822 from openxla:devel/sm100a 267cf74a084c933e532a622da2485befdc47f8ce
PiperOrigin-RevId: 720655796

Signed-off-by: Andrii Prymostka <[email protected]>