
Conversation

@SeanNaren

Refer to #633. Do not merge just yet! The modified RNN tests are failing and I'm just trying to solve that now. Let me know of any issues or feedback.

P.S. There was also an issue here that this PR fixes; I thought I'd throw it in here as well...

Contributor

@apaszke apaszke left a comment

Looks good. Can you also add asserts in all modules, so that they raise an error if you pass in skip_input=True and input_size != hidden_size?

# was: gi = F.linear(input, w_ih, b_ih)
def GRUCell(input, hidden, w_ih, w_hh, b_ih=None, b_hh=None, skip_input=False):
    xw_ih = input if skip_input else F.linear(input, w_ih, b_ih)
    gi = xw_ih

@ngimel
Collaborator

ngimel commented Jan 31, 2017

You don't have to have the w_ih and b_ih parameters for the first layer if you set skip_input to true. You also need to be more careful in the _copyParams function in the cudnn backend; otherwise this line
https://github.com/SeanNaren/pytorch/blob/806b44aadf39a60013dc02c40b72d7042bcaa1a7/torch/backends/cudnn/rnn.py#L178
will break with cuDNN v6 (cuDNN v6 returns a 0-element descriptor for the ih matrices, and you'd be trying to copy a non-zero-element parameter matrix into a tensor allocated with 0 elements).
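
For illustration only (this is not the actual _copyParams body, and the names are placeholders): the kind of guard being suggested would skip copies into the 0-element ih descriptors that cuDNN v6 returns under SKIP_INPUT.

def _copy_params_guarded(params_from, params_to):
    # Hedged sketch: with CUDNN_SKIP_INPUT, cuDNN v6 hands back empty tensors
    # for the input-hidden matrices, so skip copying into those slots.
    for src, dst in zip(params_from, params_to):
        if dst.numel() == 0:
            continue
        dst.copy_(src)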

@SeanNaren
Author

SeanNaren commented Jan 31, 2017

Thanks guys. @ngimel, would it make sense, for example, to take:

RNNReLUCell(input, hidden, w_ih, w_hh, b_ih=None, b_hh=None, skip_input=False) from here and change it to:

RNNReLUCell(input, hidden, w_hh, w_ih=None, b_ih=None, b_hh=None, skip_input=False)

and so forth for the other modules?

@apaszke
Contributor

apaszke commented Jan 31, 2017

@SeanNaren I don't think you can interleave default and non-default arguments. It'll be a syntax error.

@SeanNaren
Author

Haha, yeah, my bad. I edited the order of the params (this will require changes down through StackedRNN/Recurrent; I need to look at it more in depth).

@SeanNaren
Author

SeanNaren commented Feb 2, 2017

I'm getting different outputs from the cuDNN version of the RNNs with skip_input. I've narrowed it down to the bias: if I set bias to true (as well as skip_input to true) I get different outputs. Unless I've misunderstood SKIP_INPUT, the bias should have no effect?

The difference can be seen by installing my branch and running this little script.

@ngimel
Collaborator

ngimel commented Feb 3, 2017

@SeanNaren, I think it is the result of what could be considered a bug in cudnn: it adds the bias to the input even if CUDNN_SKIP_INPUT is set, and most likely there is some random data in the parameter tensor that is being passed to cudnn. It can be worked around by zeroing the corresponding biases before passing the parameters to cudnn, but I agree it is ugly.
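
A minimal sketch of that workaround, assuming the standard bias_ih_l{k} parameter naming of nn.RNN/LSTM/GRU and this PR's proposed skip_input flag: zero the input-hidden biases before the parameters reach cuDNN, so the unconditional bias add has no effect.

# Hedged workaround sketch; rnn, num_layers and skip_input are illustrative names.
if skip_input and rnn.bias:
    for layer in range(num_layers):
        getattr(rnn, 'bias_ih_l{}'.format(layer)).data.zero_()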

@SeanNaren
Author

SeanNaren commented Feb 6, 2017

I've added the fix to the test, but this should probably be fixed internally in cuDNN as well...

I think in its current state it can be merged into the main branch, unless anyone has any feedback!

Contributor

@apaszke apaszke left a comment

Does cuDNN require allocating the weights and biases, even when it's given CUDNN_SKIP_INPUT? They're not going to be used anyway, right?

self.skip_input = skip_input

if skip_input and input_size != hidden_size:
    raise RuntimeError("Skip input requires input size to be equal to hidden size")
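
For illustration, assuming skip_input ends up exposed as a constructor flag as this PR proposes, the check above makes mismatched sizes fail fast:

import torch.nn as nn

# Hypothetical usage of the proposed flag; input_size != hidden_size,
# so the constructor raises the RuntimeError added above.
rnn = nn.LSTM(input_size=128, hidden_size=256, skip_input=True)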

test/test_nn.py Outdated
skip_input=skip_input)

# cuDNN bug, bias still used even when skip_input true
if skip_input and bias:


@ngimel
Collaborator

ngimel commented Feb 6, 2017

cuDNN up to 5.1.10 allocated the weights irrespective of the SKIP_INPUT value; the input weights were simply not used. As of 5.1.10 and the v6 RC this was partially fixed (the weights are no longer allocated, the biases are still allocated and added, numMatrices for getLinLayerParams and the matrix "meaning" are the same irrespective of the SKIP_INPUT value, but descriptors with 0 elements are returned for the input transformation matrices).

@SeanNaren
Author

Will figure out how to remove the w_ih weight and bias in the cleanest way; it will involve a small change to the functions.

@ngimel I don't totally understand what changes would need to be made for cuDNN v6. Could you explain what would need to be considered when the w_ih weight is now missing in SKIP_INPUT mode?

@apaszke
Contributor

apaszke commented Feb 6, 2017

Just note that changing that might clash with #660

@ngimel
Collaborator

ngimel commented Feb 6, 2017

@SeanNaren, I think no changes have to be made; as long as you don't touch w_ih, everything should just work automagically. But you can experiment with cuDNN v6 yourself (a link for downloading it can be found in tools/docker/Dockerfile_v6).

@SeanNaren
Author

@apaszke sounds good. I think it would make sense to merge that in, and let me rebase onto it before making changes?

@apaszke
Contributor

apaszke commented Feb 6, 2017

Yeah I think that would be better. I'll try to review it soon.

@apaszke
Contributor

apaszke commented Feb 9, 2017

@SeanNaren the other PR is merged now

@SeanNaren
Author

Thanks, just trying to fix the test before making w_ih optional...

@SeanNaren
Author

SeanNaren commented Feb 11, 2017

@ngimel, to keep consistency when switching between cuDNN and torch RNNs, should I temporarily add the input bias in the torch RNN? I'm running into the issue that grads are returned for the input bias from cuDNN...

@ngimel
Collaborator

ngimel commented Feb 11, 2017

I don't think propagating cudnn's arguably wrong behavior is a good idea. @adamlerer, @apaszke? The input bias sent to cudnn can be zeroed, and the input bias gradients computed by cudnn can be zeroed too.
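
A hedged sketch of the second half of that suggestion (names are illustrative, and it assumes the usual bias_ih_l{k} parameters): after backward, drop the input-bias gradients that cuDNN reports under SKIP_INPUT so they never reach the optimizer.

# Illustrative only; rnn, num_layers and skip_input are assumptions.
if skip_input:
    for layer in range(num_layers):
        b_ih = getattr(rnn, 'bias_ih_l{}'.format(layer))
        if b_ih.grad is not None:
            b_ih.grad.data.zero_()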

@apaszke
Contributor

apaszke commented Feb 11, 2017

I think that if skip_input is True, then we should never return input weights. It's going to be very counterintuitive otherwise.

@SeanNaren
Author

@apaszke I think that would also solve the issue with the input bias; will get this in!

@SeanNaren
Author

I'm having trouble making w_ih and b_ih optional, mainly because the weights are positional arguments, e.g.:

def RNNReLUCell(input, hidden, w_ih, w_hh, b_ih=None, b_hh=None, skip_input=False)

I could make them None; however, this becomes tricky when the tensors are saved for the backward step in cuDNN, because non-tensors are filtered out via _iter_tensors when saved here.

I'd prefer not to change the core function for saving tensors, but trying to support not allocating the input weight and bias may be trickier than expected. Any advice on approaching this?
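
A minimal sketch of one way the cell could look with the input weights keyword-optional (the reordered signature is only illustrative, and it assumes the cell keeps the existing F.linear formulation):

import torch.nn.functional as F

def RNNReLUCell(input, hidden, w_hh, b_hh=None, w_ih=None, b_ih=None,
                skip_input=False):
    # If skip_input is set (or w_ih is absent), the input is assumed to be
    # pre-transformed and is combined with the hidden gates directly.
    igates = input if skip_input or w_ih is None else F.linear(input, w_ih, b_ih)
    hgates = F.linear(hidden, w_hh, b_hh)
    return F.relu(igates + hgates)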

@apaszke
Contributor

apaszke commented Feb 12, 2017

We could just extend _iter_tensors to accept Nones. I don't know if that's a good idea right now; I'll have to look into it.

SeanNaren added 2 commits February 13, 2017 10:18
* Added temp changes to support missing whi

* Changes to add optional weights

* Added layer checks

* iter with nones

* Added None

* Added explicit check

* Added debug statements

* Removed messages

* Print

* Attempt to remove weights

* Added none check

* Final changes
@SeanNaren
Author

Tests passed (but I needed to keep the unsqueeze operation in, otherwise I got an error in the expand). Also, I seem to have messed up the branch, so I will open a separate PR with the final changes if that's ok!

@apaszke
Contributor

apaszke commented Mar 1, 2017

No, expand should be able to do unsqueeze for you. Maybe you haven't rebased on the right commit.

Anyway, I think we'll be merging the variable sequence length PR today, because we're going to be releasing new binaries. Can you please rebase on top of that?

@SeanNaren
Author

Sounds good! Will wait for the merge and then merge those changes in :)

@soumith
Contributor

soumith commented Mar 1, 2017

it's merged now.

@SeanNaren
Author

SeanNaren commented Mar 1, 2017

Added a new branch here; however, I'm still running into the issue of not being able to expand:

  File "/home/sean.narenthiran/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 24, in LSTMCell
    x_h = input.expand(input.size(0), 4, input.size(1)) if w_ih is None else F.linear(input, w_ih, b_ih)
  File "/home/sean.narenthiran/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 658, in expand
    return Expand(sizes)(self)
  File "/home/sean.narenthiran/anaconda2/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py", line 112, in forward
    result = i.expand(*self.sizes)
  File "/home/sean.narenthiran/anaconda2/lib/python2.7/site-packages/torch/tensor.py", line 258, in expand
    raise ValueError('incorrect size: only supporting singleton expansion (size=1)')
ValueError: incorrect size: only supporting singleton expansion (size=1)

@ngimel
Collaborator

ngimel commented Mar 1, 2017

Expand only adds dimensions at the beginning, and you are trying to add a dimension in the middle. You'd first have to input.view() and then expand.

@SeanNaren
Author

@ngimel do you mean adding a singleton dim in the middle?

@ngimel
Collaborator

ngimel commented Mar 1, 2017

Yes.

@SeanNaren
Author

Any difference if I use an unsqueeze instead? It seems like it would be the cleaner option.

@ngimel
Collaborator

ngimel commented Mar 1, 2017

Sure, unsqueeze does the same thing.
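
A minimal sketch of the fix discussed above, assuming a 2-D input of shape (batch, hidden) that needs a singleton gate dimension before expanding:

# Either view or unsqueeze inserts the middle singleton dim; expand then
# broadcasts it to the four LSTM gates without copying.
x_h = input.unsqueeze(1).expand(input.size(0), 4, input.size(1))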

@SeanNaren SeanNaren mentioned this pull request Mar 1, 2017
@SeanNaren
Author

Closing this and continuing in #894

@SeanNaren SeanNaren closed this Mar 1, 2017
@SeanNaren SeanNaren deleted the skip_input branch June 8, 2017 13:02