Is it possible to do INT8 calibration when building TensorRT engines to run YOLO models on the Jetson Nano 2 GB, or does the GPU lack the capability? If INT8 isn't possible, a TensorRT engine would still be an improvement over simply running the model through the Ultralytics CLI, right?
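For reference, this is roughly the export call I had in mind (a sketch against the standard Ultralytics Python API; coco8.yaml is just a placeholder for a real calibration dataset):

```python
from ultralytics import YOLO

# Load a pretrained model and attempt an INT8 TensorRT engine build.
model = YOLO("yolov8n.pt")

# int8=True requests INT8 calibration, and data= points at the dataset
# used to generate calibration batches. Whether the Nano's Maxwell GPU
# actually supports this is exactly what I'm asking.
model.export(format="engine", int8=True, data="coco8.yaml", device=0)
```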
Thank you. Would I have to go through the ONNX format at all due to compatibility issues, or can I just export to TensorRT directly?
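If I'm reading the docs right, the two paths would look something like this (a sketch; I haven't verified either on the Nano yet):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Path A: export straight to a TensorRT engine; as I understand it,
# Ultralytics converts through ONNX internally anyway.
model.export(format="engine", device=0)

# Path B: export to ONNX only, then build the engine separately on the
# Nano, e.g. with the trtexec binary that ships with JetPack:
#   trtexec --onnx=yolov8n.onnx --saveEngine=yolov8n.engine --fp16
model.export(format="onnx")
```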
Also, I saw that there was a dla option for the device argument, but I should just use device=0 (the GPU) on the Nano since it doesn't have any Deep Learning Accelerator cores, right?
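In other words, I'm assuming the distinction is something like this (device="dla:0" is what I understand the DLA option looks like, for Xavier/Orin-class boards that actually have DLA cores; correct me if that's wrong):

```python
# What I plan to use on the Nano, which only has the Maxwell GPU:
model.export(format="engine", device=0)

# What I understand the DLA option is for (Xavier/Orin only, not the Nano):
# model.export(format="engine", device="dla:0", half=True)
```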
It seems that exporting directly to TensorRT fails because it requires the onnxruntime-gpu package, which isn't available for ARM (aarch64). Is there a workaround for this, or do I need to go through ONNX as an intermediary?
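If ONNX is the way to go, would something like this be a reasonable workaround? It builds the engine directly with the TensorRT Python bindings that ship with JetPack, so onnxruntime never enters the picture (a sketch assuming TensorRT 8.x; yolov8n.onnx is a placeholder for my exported model):

```python
import tensorrt as trt

# Build a TensorRT engine from an exported ONNX file on the Nano itself,
# avoiding onnxruntime entirely.
logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("yolov8n.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # FP16 instead of INT8, per the above

serialized = builder.build_serialized_network(network, config)
with open("yolov8n.engine", "wb") as f:
    f.write(serialized)
```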