
Conversation

@fajin-corp (Contributor) commented Oct 30, 2024

Description

Add Fp16 kernels for MatMulNBits.
Support Fp16 A computation using accuracy level 2.

BlkLen: 128 / Symmetric: 0 / HasBias: 1

| Thread | M | N | K | Fp32 Time | Fp16 Time | Fp16 latency reduction |
|---:|---:|---:|---:|---:|---:|---:|
| 1 | 1 | 4096 | 3072 | 5086301 ns | 2912092 ns | 42.7% |
| 1 | 1 | 4096 | 11008 | 17866090 ns | 10989713 ns | 38.5% |
| 1 | 1 | 11008 | 3072 | 13763608 ns | 7844626 ns | 43.0% |
| 1 | 4096 | 4096 | 3072 | 2843439224 ns | 1954152587 ns | 31.3% |
| 8 | 1 | 4096 | 3072 | 627008 ns | 371404 ns | 40.8% |
| 8 | 1 | 4096 | 11008 | 2229758 ns | 1370499 ns | 38.5% |
| 8 | 1 | 11008 | 3072 | 1713451 ns | 1008165 ns | 41.2% |
| 8 | 4096 | 4096 | 3072 | 374325569 ns | 250992166 ns | 32.9% |

Motivation and Context

Add cross-device data type support.

@fajin-corp fajin-corp requested a review from a team as a code owner October 30, 2024 01:16
@amarin16 (Contributor) commented Oct 30, 2024

There seem to be conflicts in matmul_nbits.cc and matmul_4bits_test.cc #Resolved

@fajin-corp fajin-corp force-pushed the fajin/mmnbfp16armsimd branch from 3e095fc to 98b1e5f on October 30, 2024 18:24
@edgchen1 (Contributor) left a comment:

initial review

return CompInt8;
}
// Fallback to fp16. If fp16 optimized path is not available, it will further fall back to fp32.
return CompFp16;
Contributor:

so this will return CompFp16 even if accuracy_level_attr is CompFp32?

@fajin-corp (Contributor, Author) commented Oct 31, 2024:

I don't see the point of using CompFp32 for fp16 input if CompFp16 is available. Converting fp16 to fp32 does not add precision, and the casting only makes performance worse.

Contributor:

I agree that it doesn't make sense for fp16 input. For fp16 input, what do you think about treating the default accuracy level value (unset) as CompFp16 and treating an explicit accuracy level of CompFp32 as an error?

Contributor Author:

If accuracy level 1 is given for fp16 input, maybe show a warning and use CompFp16?

Contributor:

sure, warning is good too

@fajin-corp (Contributor, Author): resolved (in reply to 2445691891)
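
The behavior agreed on in the thread above, sketched as minimal standalone C++. This is only an illustration, not the actual onnxruntime code: the function name is invented, and the enum values mirror the MatMulNBits accuracy_level attribute.

```cpp
#include <iostream>

// Illustrative values mirroring the MatMulNBits accuracy_level attribute:
// 0 = unset, 1 = fp32, 2 = fp16, 4 = int8.
enum class CompType { Undef = 0, Fp32 = 1, Fp16 = 2, Bf16 = 3, Int8 = 4 };

// Hypothetical helper: choose the compute type for an fp16 A input.
CompType ResolveComputeTypeForFp16Input(CompType requested, bool int8_kernel_available) {
  if (requested == CompType::Int8 && int8_kernel_available) {
    return CompType::Int8;
  }
  if (requested == CompType::Fp32) {
    // Upcasting fp16 to fp32 adds no precision and only costs performance,
    // so warn and take the fp16 path instead of treating this as an error.
    std::cerr << "Warning: accuracy_level=1 (fp32) requested for fp16 input; "
                 "using the fp16 path instead.\n";
  }
  // Default to fp16. If no optimized fp16 kernel is available, the fp16 path
  // itself falls back to fp32 internally.
  return CompType::Fp16;
}
```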

@github-actions (bot) left a comment:

You can commit the suggested changes from lintrunner.

}
}

void SQ4BitGemm_CompInt8(

Check warning (Code scanning / CodeQL): Poorly documented large function. Fewer than 2% comments for a function of 127 lines.
Comment on lines +750 to +753
switch (variant) {
case SQNBitGemmVariant_BitWidth4_CompInt8:
return InitializeWorkspace_CompInt8<float>;
default:
return nullptr;
}

Check notice (Code scanning / CodeQL): No trivial switch statements. This switch statement should either handle more cases, or be rewritten as an if statement.
Comment on lines +762 to +765
switch (variant) {
case HQNBitGemmVariant_BitWidth4_CompInt8:
return InitializeWorkspace_CompInt8<MLAS_FP16>;
default:
return nullptr;
}

Check notice (Code scanning / CodeQL): No trivial switch statements. This switch statement should either handle more cases, or be rewritten as an if statement.
Comment on lines +804 to +807
switch (variant) {
case HQNBitGemmVariant_BitWidth4_CompFp16:
return HQ4BitGemm_CompFp16;
default:
return nullptr;
}

Check notice (Code scanning / CodeQL): No trivial switch statements. This switch statement should either handle more cases, or be rewritten as an if statement.
PackedQuantBData = reinterpret_cast<std::byte*>(MlasAlignAddress(PackedQuantBWorkspace, 32));
QuantBBlkSum = reinterpret_cast<T*>(PackedQuantBData + PackedQuantBDataSize);
QuantBBlkSum = reinterpret_cast<T*>(MlasAlignAddress(QuantBBlkSum, MlasQNBitQuantBBlkSumAlignment()));
PackedQuantBScale = reinterpret_cast<T*>(reinterpret_cast<std::byte*>(QuantBBlkSum) + BlkSumSize);

Check failure (Code scanning / CodeQL): Suspicious pointer scaling. This pointer might have type float (size 4), but this pointer arithmetic is done with type byte * (size 1). This pointer might have type MLFloat16 (size 2), but this pointer arithmetic is done with type byte * (size 1).
@edgchen1 (Contributor) left a comment:

looks good. had a few comments.


template <typename ElementType>
std::vector<ElementType> RandomVectorUniform(
typename std::enable_if_t<!std::is_same_v<ElementType, MLAS_FP16>, std::vector<ElementType>>
Contributor:

nit: would it be simpler to have a specialization for MLAS_FP16 instead of two enable_ifs?

Contributor Author:

Since this is a .h file included in multiple .cpp files, a full specialization would trigger redefinition errors, so I chose enable_if.
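
For context on the trade-off: an explicit full specialization of a function template is an ordinary function for ODR purposes, so defining it in a header included from several .cpp files causes multiple-definition link errors unless it is marked inline, whereas SFINAE-constrained overloads are still templates and can live in the header freely. A self-contained sketch of the enable_if approach, with an invented `Half` type standing in for MLAS_FP16:

```cpp
#include <cstddef>
#include <random>
#include <type_traits>
#include <vector>

// Invented stand-in for MLAS_FP16, for illustration only.
struct Half {
  explicit Half(float f) : value(f) {}
  float value;
};

// Overload used for ordinary element types such as float.
template <typename ElementType>
std::enable_if_t<!std::is_same_v<ElementType, Half>, std::vector<ElementType>>
RandomVectorUniform(std::size_t n, ElementType min_value, ElementType max_value) {
  std::default_random_engine generator(42);
  std::uniform_real_distribution<float> distribution(static_cast<float>(min_value),
                                                     static_cast<float>(max_value));
  std::vector<ElementType> values(n);
  for (auto& v : values) {
    v = static_cast<ElementType>(distribution(generator));
  }
  return values;
}

// Overload selected only when ElementType is the half-precision type.
template <typename ElementType>
std::enable_if_t<std::is_same_v<ElementType, Half>, std::vector<ElementType>>
RandomVectorUniform(std::size_t n, float min_value, float max_value) {
  std::default_random_engine generator(42);
  std::uniform_real_distribution<float> distribution(min_value, max_value);
  std::vector<Half> values(n, Half(0.0f));
  for (auto& v : values) {
    v = Half(distribution(generator));
  }
  return values;
}
```

Marking the full specialization inline would also have worked; the enable_if overloads simply avoid the question.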

@fajin-corp fajin-corp force-pushed the fajin/mmnbfp16armsimd branch from 57ef96b to 037db3f on November 8, 2024 22:40
@fajin-corp fajin-corp force-pushed the fajin/mmnbfp16armsimd branch from 037db3f to 8050f0a on November 8, 2024 23:51
fajin-corp added a commit that referenced this pull request Nov 12, 2024
### Description
A breakdown PR of #22651. Add fp16 kernels.

fajin-corp added a commit that referenced this pull request Nov 14, 2024
### Description
A breakdown PR of #22651. Op API change only.
- add templates to functions and classes that support fp32 and fp16
- rename functions, classes and files that support fp32 and fp16 from SQNBxxx to QNBxxx

fajin-corp added a commit that referenced this pull request Nov 15, 2024
### Description
A breakdown PR of #22651.

@fajin-corp fajin-corp closed this Nov 15, 2024
guschmue pushed a commit that referenced this pull request Dec 2, 2024
### Description
A breakdown PR of #22651. Add fp16 kernels.

guschmue pushed a commit that referenced this pull request Dec 2, 2024
### Description
A breakdown PR of #22651. Op API change only.
- add templates to functions and classes that support fp32 and fp16
- rename functions, classes and files that support fp32 and fp16 from SQNBxxx to QNBxxx

guschmue pushed a commit that referenced this pull request Dec 2, 2024
### Description
A breakdown PR of #22651.

ankitm3k pushed a commit to intel/onnxruntime that referenced this pull request Dec 11, 2024
### Description
A breakdown PR of microsoft#22651. Add fp16 kernels.

ankitm3k pushed a commit to intel/onnxruntime that referenced this pull request Dec 11, 2024
### Description
A breakdown PR of microsoft#22651. Op API change only.
- add templates to functions and classes that support fp32 and fp16
- rename functions, classes and files that support fp32 and fp16 from SQNBxxx to QNBxxx

ankitm3k pushed a commit to intel/onnxruntime that referenced this pull request Dec 11, 2024
### Description
A breakdown PR of microsoft#22651.

alex-spacemit pushed a commit to spacemit-com/onnxruntime that referenced this pull request Jun 22, 2025
[ARM] MatMulNBits FP16 support - kernels only (microsoft#22806)

A breakdown PR of microsoft#22651. Add fp16 kernels.

Revert Implement DML copy for Lora Adapters (microsoft#22814)

Revert microsoft#22396

Fix issue microsoft#22796 - a typo: (__GNUC__ > 9) -> (__GNUC__ > 10) (microsoft#22807)

fix microsoft#22796
Signed-off-by: liqunfu <[email protected]>

[js/webgpu] Add scatterND (microsoft#22755)


[WebNN] Remove validation for coordinate_transformation_mode (microsoft#22811)

The performance cost of falling back to the CPU EP is high for several
resampling nodes and causes multiple partitions in SD Turbo and VAE
decoder. Since the asymmetric mode with nearest to floor and integer
scales is identical to half_pixel anyway, stick with the WebNN EP.

[TensorRT EP] Add new provider option to exclude nodes from running on TRT (microsoft#22681)

Add new provider option `trt_op_types_to_exclude`:
- User can provide op type list to be excluded from running on TRT
- e.g. `trt_op_types_to_exclude="MaxPool"`

There is a known performance issue with the DDS ops (NonMaxSuppression,
NonZero and RoiAlign) from TRT versions 10.0 to 10.7. TRT EP excludes
DDS ops from running on TRT by default, user can override default value
with empty string to include all ops.

Keep the model metadata on the generated EP context model (microsoft#22825)


[WebNN EP] Fix issues of GRU operator (microsoft#22123)

This PR fixes the spelling of the key value of the GRU operator in the
map in the `GetSupportedNodes` function (Gru -> GRU) and removes the
data type check for the fifth input (sequence_lens) of the GRU operator.

PTAL, thanks!

Auto-generated baselines by 1ES Pipeline Templates (microsoft#22817)

Fix Linux python CUDA package pipeline (microsoft#22803)

Making ::p optional in the Linux python CUDA package pipeline

Linux stage from Python-CUDA-Packaging-Pipeline has failed since merge
of microsoft#22773

[WebNN] Fix MLTensorUsage is undefined issue (microsoft#22831)

`MLTensorUsage` has been removed from Chromium:
https://chromium-review.googlesource.com/c/chromium/src/+/6015318, but
we still need to make it compatible with old Chrome versions, so just
make it `undefined` for latest Chrome version.

Enable ConvReplaceWithQLinear when using ACL (microsoft#22823)

Enable the ConvReplaceWithQLinear graph optimization when using the ACL
execution provider.

Fixes an issue where quantized Conv nodes followed by ReLU don't get
converted to QLinearConv, so ACL sees the weights as mutable and
therefore cannot run the Conv node.

Signed-off-by: Michael Tyler <[email protected]>

[CUDA] stable diffusion benchmark allows IO binding for optimum (microsoft#22834)

Update stable diffusion benchmark:
(1) allow IO binding for optimum.
(2) do not use num_images_per_prompt across all engines for fair
comparison.

Example to run benchmark of optimum on stable diffusion 1.5:
```
git clone https://github.com/tianleiwu/optimum
cd optimum
git checkout tlwu/diffusers-io-binding
pip install -e .

pip install -U onnxruntime-gpu
git clone https://github.com/microsoft/onnxruntime
cd onnxruntime/onnxruntime/python/tools/transformers/models/stable_diffusion
git checkout tlwu/benchmark_sd_optimum_io_binding
pip install -r requirements/cuda12/requirements.txt

optimum-cli export onnx --model runwayml/stable-diffusion-v1-5  --task text-to-image ./sd_onnx_fp32

python optimize_pipeline.py -i ./sd_onnx_fp32 -o ./sd_onnx_fp16 --float16
python benchmark.py -e optimum -r cuda -v 1.5 -p ./sd_onnx_fp16
python benchmark.py -e optimum -r cuda -v 1.5 -p ./sd_onnx_fp16 --use_io_binding
```

Example output on H100_80GB_HBM3: 572 ms with IO binding; 588 ms without IO binding. IO binding gains 16 ms, or 2.7%.

Optimum is working on enabling I/O binding: huggingface/optimum#2056. This could help test the impact of I/O binding on the performance of stable diffusion.

Fix Linux CI pipeline where ep was not provided for py-packaging-linux-test-cpu.yml (microsoft#22828)

Current linux-ci-pipeline was broken due to missing parameters from
`py-packaging-linux-test-cpu.yml` template

Fix Linux CI pipeline

Register groupnorm for opset 21 (microsoft#22830)

This PR registers GroupNormalization for opset 21


Fix spellchecks from Optional Lint (microsoft#22802)


Change-Id: I561dfcdadcc6fa4cda899ef3bb181f0713fadebb
alex-spacemit pushed a commit to spacemit-com/onnxruntime that referenced this pull request Jun 22, 2025
[ARM] MatMulNBits Fp16 support - API change only (microsoft#22826)

A breakdown PR of microsoft#22651. Op API change only.
- add templates to functions and classes that support fp32 and fp16
- rename functions, classes and files that support fp32 and fp16 from SQNBxxx to QNBxxx

Change-Id: Ib489e7858d42abcbe0514ac44e4d2172e32384a3

Re-enable test symbolic shape infer (microsoft#22737)

It seems that after the CI was updated to py310, numpy got updated to 2.0 and sympy 1.2 failed to cast float numpy arrays. Pin sympy to 1.13 when py>=3.9 and re-enable the unit test.

Error: Linux CPU CI

[Quant tool] Handle input models with pre-quantized weights (microsoft#22633)

Allows the QDQ quantizer to handle input models that already have some
pre-quantized weights. In this case, the qdq quantizer will properly
skip/handle the pre-quantized weights.

Also handles an operator (e.g., Conv) with a pre-quantized weight and a
float bias. The tool will read the pre-quantized weight's quantization
scale to compute the bias's scale (`bias_scale = input_scale *
weight_scale`).

Input model (pre-quantized Conv weight):

![image](https://github.com/user-attachments/assets/7d2626e4-49ad-47ae-bd0e-6339ac590435)

Output QDQ model (everything is quantized):

![image](https://github.com/user-attachments/assets/393804d3-f042-47bd-895f-3d667fb2ae94)

Customers may use external tools to quantize some weights (e.g., int4
for Conv/MatMul). The qdq quantizer should still be able to quantize the
rest of the model (float weights and activations) in this case.

Update Gradle version 8.7 and java version 17 within onnxruntime/java (microsoft#22771)

This change updates the Gradle version within the java project to 8.7 and upgrades Java to 17. The Gradle version for react-native was also updated to 7.5 to keep it compatible with the changes in the Java directory; however, its target Java version remains the same and will be upgraded in a separate PR.

This is split from microsoft#22206.

This is the first step to upgrade the react native version.

Ovep develop 1.21 (microsoft#22824)

OVEP development changes for ORT 1.21 Release

- Critical bug fixes
- Support for concurrent execution of models
- Support for OV 2024.5
- Memory optimizations for the NPU platform

---------

Co-authored-by: jatinwadhwa921 <[email protected]>
Co-authored-by: Ankit Maheshkar <[email protected]>
Co-authored-by: sfatimar <[email protected]>
Co-authored-by: saurabhkale17 <[email protected]>
Co-authored-by: TejalKhade28 <[email protected]>
Co-authored-by: Javier E. Martinez <[email protected]>

Fix 1.20 cuda minimal build failure (microsoft#22751)

Fixes build failure for the cuda minimal build

<!-- - Why is this change required? What problem does it solve?
- If it fixes an open issue, please link to the issue here. -->
[This change](microsoft#19470) in
1.20 is causing build failures for the cuda minimal build.
Essentially, some cudnn logic was not guarded by the `USE_CUDA_MINIMAL`.
Also the build is looking for cudnn while in the cuda minimal build it
shouldn't depend on it, resulting in linking error.

cc @gedoensmax @chilo-ms

[ARM] MatMulNBits fp16 support - connect kernels (microsoft#22856)

A breakdown PR of microsoft#22651


Change-Id: I3014c1002ff375a507bc04de7756baacf9a2b77a

[WebNN EP] Support Einsum op (microsoft#19558)

Adds support for einsum via WebNN matmul, transpose, reshape, reducesum,
identity and element-wise binary ops.

Refactor SkipLayerNorm and handle beta properly (microsoft#22862)

Signed-off-by: Liqun Fu <[email protected]>
Signed-off-by: Liqun Fu <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Change-Id: Ic5b8a6eb775542a57f07f5e593cc399dd7eeaa8f

Fix CUDA/DML package exception caused by ENABLE_CUDA_NHWC_OPS (microsoft#22851)

Now, ENABLE_CUDA_NHWC_OPS is enabled by default. This adds a new chance to create the CUDA provider while both CUDA and DML are enabled.

Optimize Transpose around QLinearSoftmax (microsoft#22849)

- Improved Transpose handling around QLinearSoftmax in the Level 3 NHWC Transformer.
- Removed redundant code: HandleQLinearConcat, HandleQLinearBinaryOp.

By merging and eliminating redundant transposes, the Image Segmentation i8 model (MobileNetv2 + DeepLabv3) achieves a 2.34x speedup.

Replace INFINITY by std::numeric_limits<float>::infinity() (microsoft#22868)

Replace INFINITY by `std::numeric_limits<float>::infinity()` to avoid
build errors with Visual Studio 2022 v17.12 Preview 5

microsoft#22728

[js/webgpu] Optimize transpose as reshape when suitable (microsoft#22870)

BUG microsoft#22031

Change-Id: I6c70d84228f1563792218c6c3c18b023852d4147

clang format code

Change-Id: I422a9474da9e9cfc9ac8819569a13520c5d2641f