I am running onnxruntime-gpu 1.19 on a Jetson AGX Orin dev kit with JetPack 6.2 and cuDNN 9. However, onnxruntime-gpu 1.19 only supports cuDNN 8, so I am hitting major errors. This part is critical to my project. Kindly release onnxruntime-gpu 1.20 for aarch64, because it supports cuDNN 9.
Hi,
You can find the onnxruntime-gpu package for JetPack 6.2 at the link below:
Thanks.
After installing the jp6/cu126 onnxruntime-gpu 1.23.0 wheel,
the code sometimes runs smoothly with this warning:
onnxruntime cpuid_info warning: Unknown CPU vendor. cpuinfo_vendor value: 0
2025-10-07 17:39:45.428567886 [W:onnxruntime:Default, device_discovery.cc:164 DiscoverDevicesForPlatform] GPU device discovery failed: device_discovery.cc:89 ReadFileContents Failed to open file: "/sys/class/drm/card1/device/vendor"
and sometimes crashes with this error:
onnxruntime cpuid_info warning: Unknown CPU vendor. cpuinfo_vendor value: 0
python: malloc.c:2617: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.
Aborted (core dumped)
Is there any reason you don't use TensorRT directly, rather than onnxruntime?
I have to run the .pth, .onnx, and .trt models on both CPU and GPU to show an execution-time comparison to my professors and the project team. The end goal is always .trt.
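For the ONNX half of that comparison, a minimal timing sketch (assuming onnxruntime-gpu is installed; model.onnx, the input shape, and the 100-run count are placeholders to adjust):

import time
import numpy as np
import onnxruntime as ort

# Placeholder model path and dummy input; adjust to the real model.
model_path = "model.onnx"
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

for label, providers in [("CPU", ["CPUExecutionProvider"]),
                         ("GPU", ["CUDAExecutionProvider", "CPUExecutionProvider"])]:
    sess = ort.InferenceSession(model_path, providers=providers)
    name = sess.get_inputs()[0].name
    sess.run(None, {name: dummy})                      # warm-up run
    start = time.perf_counter()
    for _ in range(100):
        sess.run(None, {name: dummy})
    print(f"{label}: {(time.perf_counter() - start) / 100 * 1000:.2f} ms/inference")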
Hi,
Do you have a sample that can reproduce the assert so we can check it further on our side?
Thanks.
Running python in the terminal causes this error, even if I am not importing onnxruntime. pip uninstall onnxruntime-gpu resolves the issue.
Hi,
We haven't seen a similar issue before.
But according to the topic below, the issue might come from the ONNX model itself.
Could you verify the ONNX model's functionality first?
For example, inference with CPU, or inference with GPU in a desktop environment?
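A minimal sanity-check sketch, assuming the onnx and onnxruntime packages are installed and using model.onnx as a placeholder path:

import numpy as np
import onnx
import onnxruntime as ort

# Placeholder model path; replace with the actual file.
model_path = "model.onnx"

# Structural validity check of the ONNX graph itself.
onnx.checker.check_model(onnx.load(model_path))

# Single inference on CPU (swap in CUDAExecutionProvider to test the GPU path).
sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
# Replace symbolic/dynamic dimensions with 1; assumes a float32 input.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})
print("output shapes:", [o.shape for o in outputs])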
Thanks.
You could try building from source.
git clone --recursive https://github.com/microsoft/onnxruntime.git
cd onnxruntime
# pick your version, most recent is v1.23.1. https://github.com/microsoft/onnxruntime/releases
git checkout v1.23.1
git submodule update --init --recursive
./build.sh --config Release --update --build --parallel 8 \
--cmake_generator Ninja --skip_tests \
--enable_pybind --build_wheel \
--use_cuda \
--cuda_home /usr/local/cuda \
--cudnn_home /usr/lib/aarch64-linux-gnu \
--use_tensorrt \
--tensorrt_home /usr/lib/aarch64-linux-gnu \
--cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=87  # SM 8.7 = Orin GPU
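Once the build finishes, the wheel is typically written under build/Linux/Release/dist/; after installing it with pip, a quick check that the CUDA and TensorRT execution providers are actually registered (a minimal sketch):

import onnxruntime as ort

print(ort.__version__)
# Expect CUDAExecutionProvider and TensorrtExecutionProvider in this list.
print(ort.get_available_providers())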