Jetson Nano TensorRT Engine INT8

Hi everyone,

Is it possible to do INT8 calibration with TensorRT engines to run YOLO models on the Jetson Nano 2 GB, or does the GPU lack the capability to do so? And if INT8 is not possible, TensorRT engines would still provide an improvement over simply using the Ultralytics CLI, right?

Thank you!

Ultralytics natively supports exporting to TensorRT with INT8, and it also natively supports running inference with the exported TensorRT engine.
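For example, something along these lines should work (a minimal sketch; the checkpoint name, calibration dataset YAML, and image URL are placeholders to adjust to your own setup):

```python
from ultralytics import YOLO

# Load a pretrained checkpoint (any Ultralytics YOLO model works)
model = YOLO("yolov8n.pt")

# Export to a TensorRT engine with INT8 post-training calibration.
# `data` points to a dataset YAML whose images are used for calibration;
# device=0 builds the engine on the GPU.
model.export(format="engine", int8=True, data="coco128.yaml", device=0)

# Run inference directly with the exported engine
trt_model = YOLO("yolov8n.engine")
results = trt_model.predict("https://ultralytics.com/images/bus.jpg")
```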

Thank you. Would I have to go through the ONNX format at all due to compatibility issues, or can I just export to TensorRT directly?

Also, I saw that there was a dla option for device, but I should just use device=0 (the GPU) on the Nano since it doesn't have any Deep Learning Accelerator cores, right?

Thank you very much!

Hi,

It seems that exporting directly to TensorRT results in an error because it requires the onnxruntime-gpu package, which doesn't have an ARM build. Is there a workaround for this, or do I need to go through ONNX as an intermediary? The two-step path I had in mind is sketched below.
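For reference, this is roughly what I was planning to try (a sketch only; it assumes the JetPack-provided tensorrt Python bindings and that the export writes yolov8n.onnx):

```python
import tensorrt as trt
from ultralytics import YOLO

# Step 1: export the PyTorch checkpoint to ONNX (no onnxruntime-gpu needed)
YOLO("yolov8n.pt").export(format="onnx")  # writes yolov8n.onnx

# Step 2: build a TensorRT engine from the ONNX file
logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)
with open("yolov8n.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28  # keep the build modest on a 2 GB board
config.set_flag(trt.BuilderFlag.FP16)  # FP16 fallback while INT8 is in question
serialized = builder.build_serialized_network(network, config)
with open("yolov8n.engine", "wb") as f:
    f.write(serialized)
```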

Thanks.

Hi,
INT8 is not supported on Jetson Nano. Please refer to:
Can Jetson nano import int8? - #5 by AastaLLL
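You can confirm this on the device itself with the TensorRT Python bindings (a minimal check; it assumes the JetPack-provided tensorrt package):

```python
import tensorrt as trt

builder = trt.Builder(trt.Logger(trt.Logger.WARNING))

# Jetson Nano's Maxwell GPU lacks the DP4A instructions that fast INT8
# kernels require, so the first query reports False there.
print("Fast INT8 supported:", builder.platform_has_fast_int8)
print("Fast FP16 supported:", builder.platform_has_fast_fp16)
```

FP16 is supported, so a half-precision engine is the usual fallback on Nano.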

We would suggest using the DeepStream SDK on Jetson Nano. The supported version is 6.0.1:

NVIDIA Metropolis Documentation
https://docs.nvidia.com/metropolis/deepstream/DeepStream_6.0.1_Release_Notes.pdf

Please check the document for setup instructions and give it a try.

Thank you. Is it possible to quantize to INT8 but just not run inference in INT8, or are both not possible?

Also, sorry, but what is the purpose of the DeepStream SDK in this case? And how can I tell whether it is already installed?

Thank you.

Hi,
If using INT8 is a hard requirement for your use case, we would suggest using a later Jetson platform, such as the Xavier NX or Orin Nano.

Regarding its purpose: DeepStream is NVIDIA's SDK for building hardware-accelerated video-analytics pipelines; it runs TensorRT-optimized models inside an end-to-end camera/video pipeline, which is why we recommend it for deployment on Jetson.

To install the DeepStream SDK on Jetson Nano, please refer to:

Quickstart Guide — DeepStream 6.0.1 Release documentation
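
One quick way to check whether DeepStream is already installed (a small sketch; it assumes the default install prefix /opt/nvidia/deepstream):

```python
from pathlib import Path

# DeepStream installs a version file under its standard prefix;
# its presence (and contents) indicates an existing installation.
version_file = Path("/opt/nvidia/deepstream/deepstream/version")
if version_file.exists():
    print("DeepStream found:", version_file.read_text().strip())
else:
    print("DeepStream not found; follow the Quickstart Guide to install it.")
```

Running `deepstream-app --version` from a terminal also prints the installed version if the sample applications are present.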
