Running Inference Program on Jetson AGX Orin

Hi, I’m currently following the tutorials on the jetson-inference GitHub page. I re-installed JetPack on my Jetson AGX Orin developer kit, so I am now running JetPack 6.0, which means I have an earlier version of TensorRT. I have been able to successfully run the Docker container and have trained a model on my own custom dataset.

My question now is how I can use this model for my intended purpose. I won’t get into too many specifics, but I want to send my model an image to perform inference on every 5 seconds or so. The problem I currently see is that when I run inference (using the imagenet command), the model takes some time to load into memory (I believe this is what is happening). I believe this because when I run inference on a folder, the first image always takes much longer than the following ones. I want to avoid this overhead, as my inference is time critical.

First I need to ask if this sounds possible: load my model into memory, have it “wait” to receive image inputs, receive an image and perform inference, then go back to waiting for the next image without unloading the model.

  1. Is this possible? I am assuming it is.

  2. Are there any specific resources you can point me to for this?

Thank you for taking the time to read this. I’ll probably be asking many questions on this forum, as I’m still new to this kind of development.

Andrew

Hi,

Yes, it’s possible.

For example, you can load the model with net = imageNet(args.network, sys.argv) first.
Then, whenever you need to process an image, call predictions = net.Classify(img, topK=args.topK) to get the classification output.
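
In case it helps, here is a minimal sketch of that load-once pattern. The model/label paths are placeholders for your custom model, and “waiting” for an image is modeled here as polling a drop folder every 5 seconds (both are assumptions, not from your setup; the keyword-argument constructor also assumes a recent jetson-inference build):

```python
#!/usr/bin/env python3
# Minimal sketch: load the network once at startup, then classify images
# on demand without unloading the model between requests.
#
# Assumptions: the model/label paths are placeholders for a custom ONNX
# model trained with jetson-inference, and new images arrive by being
# dropped into a watch folder.

import os
import time

from jetson_inference import imageNet
from jetson_utils import loadImage

WATCH_DIR = "incoming"   # hypothetical folder where new images arrive
POLL_SECONDS = 5

# Load the model a single time; the TensorRT engine load/build cost is
# paid here, not on every image.
net = imageNet(model="model/resnet18.onnx",
               labels="model/labels.txt",
               input_blob="input_0",
               output_blob="output_0")

processed = set()

while True:
    # "Wait" for work: look for images we have not classified yet.
    for name in sorted(os.listdir(WATCH_DIR)):
        path = os.path.join(WATCH_DIR, name)
        if path in processed:
            continue
        processed.add(path)

        img = loadImage(path)                     # load image into GPU memory
        class_id, confidence = net.Classify(img)  # inference only, no reload
        print(f"{name}: {net.GetClassDesc(class_id)} ({confidence * 100:.1f}%)")

    time.sleep(POLL_SECONDS)
```

The same pattern works if your images arrive over a socket or from a camera stream instead of a folder; the key point is that imageNet is constructed once, outside the loop, so only the first construction pays the loading cost.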

Please find the whole example at the link below:

https://github.com/dusty-nv/jetson-inference/blob/master/python/examples/imagenet.py

Thanks.
