Conversation

@andrii-uniq
Contributor

Signed-off-by: Andrii Prymostka [email protected]

@ebrevdo
Contributor

ebrevdo commented Aug 23, 2018

Does this help for long sequence lengths?

@andrii-uniq
Contributor Author

For N=20000 (with the code from #4193), the timing plot is attached as an image.

@googlebot

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and have the pull request author add another comment and the bot will run again. If the bot doesn't comment, it means it doesn't think anything has changed.

@googlebot

CLAs look good, thanks!

@googlebot googlebot added cla: yes and removed cla: no labels Sep 6, 2018
@ebrevdo
Contributor

ebrevdo commented Oct 22, 2018

Sorry for the delay. You may need to rebase on head?

@andrii-uniq
Contributor Author

The commit can be rebased onto head without any merge conflicts.
Actually, in my repo it is now on top of ee51015.

@akshaym akshaym added the awaiting review Pull request awaiting review label Nov 6, 2018
@OldBlacky

Hi,
any news on this?
This patch would really solve some of my problems.

@dksb dksb added the size:XL CL Change Size:Extra Large label Jan 2, 2019
@tensorflowbutler
Member

Nagging Reviewer @ebrevdo: You have been added as a reviewer to this pull request. Please add your review or reassign. It has been 164 days with no activity and the awaiting review label has been applied.

@rthadur rthadur assigned rthadur and unassigned akshaym Jun 26, 2019
@rthadur
Contributor

rthadur commented Jul 8, 2019

I'm going to go ahead and close this PR, because it seems to have stalled. If you're still interested in pursuing this (and responding to my comments), please feel free to reopen!

@rthadur rthadur closed this Jul 8, 2019
@andrii-uniq
Contributor Author

I am interested in getting this request merged and in responding to the comments, but not in rebasing the changes, since I am not familiar with the current codebase.

@rthadur
Contributor

rthadur commented Jul 22, 2019

@aprimostka thank you, please open a new PR with these changes so that you can avoid rebasing.

copybara-service bot pushed a commit that referenced this pull request Jan 28, 2025
Imported from GitHub PR openxla/xla#21822

Created `ShouldUsePtxExtension` helper for the extension suffix (this will also be used for sm120, etc).

CUDA 12.8 was recently released, which supports PTX 8.7, but that is not supported by the integrated LLVM (support added in llvm/llvm-project#124155), so leaving the association with PTX 8.6 - this doesn't raise warnings during compilation.

Copybara import of the project:

--
267cf74a084c933e532a622da2485befdc47f8ce by Sergey Kozub <[email protected]>:

Add support for SM100a architecture (Blackwell)

Merging this change closes #21822

FUTURE_COPYBARA_INTEGRATE_REVIEW=openxla/xla#21822 from openxla:devel/sm100a 267cf74a084c933e532a622da2485befdc47f8ce
PiperOrigin-RevId: 720655796
copybara-service bot pushed a commit that referenced this pull request Jan 29, 2025
Imported from GitHub PR openxla/xla#21822

Created `ShouldUsePtxExtension` helper for the extension suffix (this will also be used for sm120, etc).

CUDA 12.8 was recently released, which supports PTX 8.7, but that is not supported by the integrated LLVM (support added in llvm/llvm-project#124155), so leaving the association with PTX 8.6 - this doesn't raise warnings during compilation.

Copybara import of the project:

--
267cf74a084c933e532a622da2485befdc47f8ce by Sergey Kozub <[email protected]>:

Add support for SM100a architecture (Blackwell)

Merging this change closes #21822

PiperOrigin-RevId: 720806648
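
The commit message above describes a `ShouldUsePtxExtension` helper that decides when a compute capability should be compiled with the architecture-specific PTX suffix (e.g. `sm_100a` for Blackwell), while keeping the PTX ISA association at 8.6 until the bundled LLVM gains 8.7 support. The C++ sketch below only illustrates that idea; the struct, the second helper, and the exact condition are assumptions for illustration, not the actual code from openxla/xla#21822.

```cpp
// Hypothetical sketch of the helper described in the commit message.
// Names other than ShouldUsePtxExtension are assumptions, not XLA's API.
#include <string>

struct CudaComputeCapability {
  int major;
  int minor;
};

// Returns true if the target should use the architecture-specific PTX
// extension suffix (e.g. "sm_100a" rather than "sm_100"). Per the commit
// message, the same rule would later cover sm_120a and similar targets.
bool ShouldUsePtxExtension(const CudaComputeCapability& cc) {
  return cc.major >= 10;  // Blackwell-class and newer (assumed condition).
}

// Builds the ptxas target string, appending "a" when the extension applies.
std::string PtxTargetString(const CudaComputeCapability& cc) {
  std::string target =
      "sm_" + std::to_string(cc.major) + std::to_string(cc.minor);
  if (ShouldUsePtxExtension(cc)) {
    target += "a";
  }
  return target;
}

// Note: as the message explains, CUDA 12.8 supports PTX ISA 8.7, but the
// integrated LLVM does not yet, so the version mapping stays at PTX 8.6.
```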