Using YOLO and Custom Models with Isaac Sim

Hello, I have completed the object-detection tutorials and I would like to understand the correct workflow for using YOLO with Isaac Sim. I’m trying to figure out how to launch the necessary Isaac ROS nodes and how to make them work with a camera inside Isaac Sim.

I would also like to know how to use a custom YOLO model. I noticed that isaac_ros_yolov8_visualizer.py already contains predefined class_id mappings, and I’m not sure how to adapt the pipeline if my model detects different classes. I don’t know exactly what information is included inside the .onnx file and whether simply replacing the model is enough, or if additional manual modifications are required.

Any guidance or clarification would be greatly appreciated.

Your question is more closely related to Isaac ROS, so let me move it to the Isaac ROS channel for better support.

I will also reach out to the internal team about your question.

In the meantime, here is a resource I found online that might be helpful:

https://medium.com/@kabilankb2003/leveraging-nvidia-isaac-sim-with-yolov8-advanced-object-detection-and-segmentation-in-warehouse-b162e86e3478


We don’t have a dedicated Isaac Sim with YOLOv8 tutorial, but the workflow is the same as for other detectors: set up Isaac Sim to publish camera images of your scene on the topic the inference nodes subscribe to. You can refer to the isaac_ros_rtdetr with Isaac Sim tutorial, which follows the same pattern.
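As a rough sketch of that wiring: in Isaac Sim you add a ROS 2 Camera Helper (via an Action Graph) so a camera publishes `sensor_msgs/Image`, then point the Isaac ROS YOLOv8 launch file at that topic. The launch-file and argument names below are assumptions based on the Isaac ROS quickstarts; check the package version you have installed for the exact names.

```shell
# 1. In Isaac Sim, enable the ROS 2 bridge and add a Camera Helper node
#    to an Action Graph so the viewport camera publishes RGB images,
#    e.g. on /rgb.

# 2. Launch the YOLOv8 inference pipeline, remapping its image input to
#    the topic Isaac Sim publishes (argument names are illustrative --
#    consult the isaac_ros_yolov8 quickstart for your release):
ros2 launch isaac_ros_yolov8 isaac_ros_yolov8_visualize.launch.py \
  model_file_path:=/path/to/your_model.onnx \
  engine_file_path:=/path/to/your_model.plan \
  image_input_topic:=/rgb

# 3. Before debugging the detector, confirm images are actually flowing:
ros2 topic hz /rgb
```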

The visualizer tool we provide does indeed contain the class names for the specific model we ship, but that is simply a convenience feature for annotating the bounding boxes with class labels. You can update the class names in that script, write your own visualization script, or use ours as-is and ignore the incorrect labels. Visualization is best treated as a debugging tool, not something you should include in your production pipeline.
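To make that concrete: the mapping in isaac_ros_yolov8_visualizer.py is just a plain `{class_id: "label"}` dictionary, so adapting it to a custom model means replacing that table with your model's classes (in the same order they were trained). A minimal sketch, with hypothetical class names standing in for your model's:

```python
# Hypothetical class mapping for a custom warehouse model; replace these
# entries with the classes your own model was trained on, in order.
CUSTOM_NAMES = {
    0: "pallet",
    1: "forklift",
    2: "worker",
}

def label_for(class_id: int, names: dict = CUSTOM_NAMES) -> str:
    """Return a human-readable label for a detection's class id,
    falling back to the raw id so unknown classes are still visible."""
    return names.get(class_id, f"class_{class_id}")
```

The fallback matters during debugging: if your model emits an id the table doesn't cover, you still see `class_7` on the box instead of a crash or a silently wrong label.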
