[WebNN EP] Fix AddInitializersToSkip issues #23354
Merged
fdwr merged 2 commits into microsoft:main on Jan 18, 2025
Conversation
Contributor
/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline
Contributor
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline
Contributor
/azp run Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI
Contributor
/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models
Azure Pipelines successfully started running 2 pipeline(s).
Azure Pipelines successfully started running 4 pipeline(s).
1 similar comment
Azure Pipelines successfully started running 4 pipeline(s).
Azure Pipelines successfully started running 9 pipeline(s).
ashrit-ms pushed a commit that referenced this pull request on Jan 23, 2025
### Description
When an ONNX model reuses an initializer across more than one op, one op may add that initializer to the skip list while other ops still need it, which causes the process to crash. Therefore, like other EPs, we maintain `initializer_usage_`, a count of each initializer's occurrences across all ops, and modify `AddInitializersToSkip` to decrement the corresponding initializer's count when the specific operator is added, so the initializer is only skipped once no op still needs it.
ashrit-ms added a commit that referenced this pull request on Jan 23, 2025
### Description
This PR is to update the win-ort-main branch to the tip of the main branch as of 2025-01-23.

### PR List
- ddf0d37 [QNN EP] Add LoggingManager::HasDefaultLogger() to provider bridge API (#23467)
- 05fbbdf [QNN EP] Make QNN EP a shared library (#23120)
- 1336566 Add custom vcpkg ports (#23456)
- 2e1173c Update the compile flags for vcpkg packages (#23455)
- 1f628a9 [Mobile] Add BrowserStack Android MAUI Test (#23383)
- 009cae0 [js/webgpu] Optimize ConvTranspose (Continue) (#23429)
- 04a4a69 Use onnx_protobuf.h to suppress some GCC warnings (#23453)
- 2e3b62b Suppress some strict-aliasing related warnings in WebGPU EP (#23454)
- b708f9b Bump ruff from 0.9.1 to 0.9.2 (#23427)
- c0afc66 [WebNN] Remove workarounds for TFLite backend (#23406)
- 8a821ff Bump vite from 6.0.7 to 6.0.11 in /js/web/test/e2e/exports/testcases/vite-default (#23446)
- 220c1a2 Make ORT and Dawn use the same protobuf/abseil source code (#23447)
- b7b5792 Change MacOS-13 to ubuntu on for android-java-api-aar-test.yml. (#23444)
- 19d0d2a WIP: Dp4MatMulNBits accuracy level 4 matmul for WebGPU EP (#23365)
- 95b8eff [QNN EP]: Clean up QNN logging resources if an error occurs during initialization (#23435)
- 626134c Bump clang-format from 19.1.6 to 19.1.7 (#23428)
- 0cf9753 Fix eigen external deps (#23439)
- f9440ae Moving RN_CI Android Testing to Linux (#23422)
- 1aa5902 [QNN EP] workaround for QNN validation bug for Tanh with uint16 quantized output (#23432)
- 7f5582a Seperate RN andriod and IOS into 2 separated Stages. (#23400)
- 73deac2 Implement some missing element wise Add/Sub/Mul/Div/Neg operations for CPU and CUDA EPs (#23090)
- 949fe42 Upgrade Java version from react-native/android to Java 17 (#23066)
- 0892c23 Update Qnn SDK default version to 2.30 (#23411)
- 94c099b Fix type cast build error (#23423)
- d633e57 [WebNN EP] Fix AddInitializersToSkip issues (#23354)
- e988ef0 [QNN EP] Fix regression for MatMul with two quantized/dynamic uint16 inputs (#23419)
- 7538795 Update onnxruntime binary size checks ci pipeline's docker image (#23405)
- 6c5ea41 Revert "[QNN EP] Clean up correctly from a partial setup (#23320)" (#23420)
- e866804 Enable comprehension simplification in ruff rules (#23414)
- 0a5f1f3 bugfix: string_view of invalid memory (#23417)
- 4cc38e0 fix crash when first input of BatchNormalization is 1-D (#23387)
- 0334414 Target py310 and modernize codebase with ruff (#23401)
- 87341ac [QNN EP] Fix segfault when unregistering HTP shared memory handles (#23402)

### Motivation and Context
This update includes the change to make QNN-EP a shared library.

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Adrian Lizarraga <[email protected]>
Co-authored-by: Justin Chu <[email protected]>
Co-authored-by: Yulong Wang <[email protected]>
Co-authored-by: Edward Chen <[email protected]>
Co-authored-by: Changming Sun <[email protected]>
Co-authored-by: Peishen Yan <[email protected]>
Co-authored-by: Tianlei Wu <[email protected]>
Co-authored-by: Hector Li <[email protected]>
Co-authored-by: Jian Chen <[email protected]>
Co-authored-by: Alexis Tsogias <[email protected]>
Co-authored-by: junchao-zhao <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: sushraja-msft <[email protected]>
Co-authored-by: Wanming Lin <[email protected]>
Co-authored-by: Jiajia Qin <[email protected]>
Co-authored-by: Caroline Zhu <[email protected]>
guschmue pushed a commit that referenced this pull request on Mar 6, 2025
ashrit-ms pushed a commit that referenced this pull request on Mar 17, 2025
Description
When an ONNX model reuses an initializer across more than one op, one op may add that initializer to the skip list while other ops still need it, which causes the process to crash.
Therefore, like other EPs, we maintain `initializer_usage_`, a count of each initializer's occurrences across all ops, and modify `AddInitializersToSkip` to decrement the corresponding initializer's count when the specific operator is added, so the initializer is only skipped once no op still needs it.