[Fix] CPU memory type needs device_id=0 to get allocator #14050
Merged
PeixuanZuo merged 1 commit into main on Dec 26, 2022
Conversation
pengwa approved these changes on Dec 22, 2022
simon-moo pushed a commit to simon-moo/onnxruntime that referenced this pull request on Dec 26, 2022
### Description
Fix an error: `onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /onnxruntime_src/onnxruntime/core/framework/allocation_planner.cc:819 onnxruntime::common::Status onnxruntime::PlannerImpl::ComputeValueLocation() allocator was false.`

This error happens when we run Hugging Face models with DDP on multiple GPUs. On a process with rank > 0, ORT attempts to obtain a CPU memory allocator with device_id > 0, which causes the error. The workaround checks whether a node's output is on CPU; if it is, device_id is set to 0.

Co-authored-by: peixuanzuo <peixuanzuo@linmif39a000004.zvflicr54joexhdgnhvmxrxygg.phxx.internal.cloudapp.net>
pengwa added a commit that referenced this pull request on May 9, 2023
### Add CPU allocation test for non-CPU devices in a distributed run
When the CUDA EP is enabled in distributed training, CPU memory is still used for some node outputs. We already had distributed-run test coverage, but it did not cover the case where some nodes store their tensor output on the CPU device. As a result, we hit this regression twice in the past months:
- #14050
- #15823

This test is added to avoid future regressions. The test graph looks like this: 
prathikr pushed a commit that referenced this pull request on May 16, 2023
Description
Fix an error:
`onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /onnxruntime_src/onnxruntime/core/framework/allocation_planner.cc:819 onnxruntime::common::Status onnxruntime::PlannerImpl::ComputeValueLocation() allocator was false.`

This error happens when we run Hugging Face models with DDP on multiple GPUs. On a process with rank > 0, ORT attempts to obtain a CPU memory allocator with device_id > 0, and no allocator is registered for that combination, which causes the error. The workaround checks whether a node's output is on CPU; if it is, device_id is set to 0 before the allocator lookup.
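A minimal, self-contained C++ sketch of the idea behind the workaround (this is not the actual allocation_planner.cc code; `DeviceType`, `MemoryInfo`, and `NormalizeForAllocatorLookup` are simplified stand-ins introduced here for illustration):

```cpp
#include <cstdint>
#include <iostream>

// Simplified stand-ins for ORT's device/memory metadata (illustration only).
enum class DeviceType { CPU, GPU };

struct MemoryInfo {
  DeviceType type;
  int16_t device_id;  // for GPU tensors this is typically the local rank
};

// Hypothetical helper: normalize the device id before an allocator lookup.
// CPU allocators are keyed on device_id 0, so a rank > 0 process must not
// request a CPU allocator with its GPU device id.
MemoryInfo NormalizeForAllocatorLookup(MemoryInfo info) {
  if (info.type == DeviceType::CPU) {
    info.device_id = 0;
  }
  return info;
}

int main() {
  // A CPU-resident node output produced on the rank-3 process.
  MemoryInfo cpu_output_on_rank3{DeviceType::CPU, 3};
  MemoryInfo normalized = NormalizeForAllocatorLookup(cpu_output_on_rank3);
  std::cout << "device_id used for allocator lookup: "
            << normalized.device_id << "\n";  // prints 0
  return 0;
}
```

Without this normalization, the lookup for a CPU allocator with a rank-derived device_id returns nothing, which is what surfaces as the `allocator was false` failure above.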