Industrial enterprises are embracing physical AI and autonomous systems to transform their operations. This involves deploying heterogeneous robot fleets that include mobile robots, humanoid assistants, intelligent cameras, and AI agents throughout factories and warehouses.
To harness the full potential of these physical AI-enabled systems, companies rely on digital twins of their facilities—virtual environments for simulating and optimizing how autonomous systems interact and perform complex tasks. This simulation-first approach enables enterprises to validate that their robot fleets can coordinate and adapt in dynamic environments before physical deployment, accelerating the transition to truly autonomous industrial operations.
Robot fleet simulation in industrial digital twins
The Mega NVIDIA Omniverse Blueprint enables enterprises to accelerate the development and deployment of physical AI in factories, warehouses, and industrial facilities.
The workflow enables developers to take advantage of sensor simulation and synthetic data generation to simulate complex autonomous operations and verify the performance of physical AI systems in industrial digital twins before real-world deployment.
This post explains the components of the blueprint so you can get started developing your own robot fleet simulation and validation pipeline.

Facility or fleet management systems
Fleet management systems help enterprises manage, coordinate, and optimize robot fleets for specific objectives or tasks such as transporting goods, sorting items, and managing inventory. These systems can be integrated into the workflow to enhance decision-making. By connecting them to an enterprise data lake, the fleet manager can draw on a wealth of operational data to improve coordination and optimize fleet missions.

Within the blueprint, the integrated fleet manager software interfaces with the various robot brains using the industry-standard VDA5050 interface, an open standard for communication between a robot fleet and a central master controller. This ensures that the fleet manager can reliably communicate with and control every robot in the fleet.
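As a rough illustration of what that communication looks like, the snippet below builds a minimal VDA5050-style order that a fleet manager could send to a single robot. All identifiers (manufacturer, serial number, node and edge IDs) are invented for this example, and in practice the fleet manager publishes the message as JSON over MQTT on a per-robot topic rather than printing it.

```python
# Hypothetical sketch of a minimal VDA5050-style order message.
# In a real deployment this JSON is published over MQTT on a per-robot topic
# such as "uagv/v2/<manufacturer>/<serialNumber>/order" (topic layout assumed).
import json
import time

order = {
    "headerId": 1,
    "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S.000Z", time.gmtime()),
    "version": "2.0.0",
    "manufacturer": "ExampleCo",          # invented for illustration
    "serialNumber": "AMR-001",            # invented for illustration
    "orderId": "order-42",
    "orderUpdateId": 0,
    # Drive from a pick station to a drop station; nodes/edges released up front.
    "nodes": [
        {"nodeId": "pick_station", "sequenceId": 0, "released": True},
        {"nodeId": "drop_station", "sequenceId": 2, "released": True},
    ],
    "edges": [
        {"edgeId": "pick_to_drop", "sequenceId": 1, "released": True,
         "startNodeId": "pick_station", "endNodeId": "drop_station"},
    ],
}

print(json.dumps(order, indent=2))
```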
Robot brains and robot policies
Robot brains can be as simple or as complex as needed for real-world robot operations. The robot brain, or policy, is the decision-making system: it defines the robot's behavior by mapping input data to output actions. Sensors such as cameras or LiDAR collect data, the robot processes that data with its learned policies, and the policy outputs an action that is sent to the actuators. These robot brains are integrated into the blueprint as containers, exposing the same actuation and sensor interfaces they would expose on real robots.
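Conceptually, every robot brain runs the same sense-decide-act loop regardless of how sophisticated the policy is. The sketch below is a deliberately simple, hypothetical policy (not the blueprint's API): it reads an observation, checks the LiDAR scan for nearby obstacles, and emits a velocity command.

```python
# Illustrative sketch of the sense -> decide -> act loop a robot brain runs.
# The class and method names here are hypothetical, not the blueprint's API.
from dataclasses import dataclass

@dataclass
class Observation:
    lidar_ranges: list[float]   # simplified LiDAR scan, meters
    camera_frame: bytes         # encoded camera image

@dataclass
class Action:
    linear_velocity: float      # m/s, forward speed
    angular_velocity: float     # rad/s, turn rate

class SimplePolicy:
    """Toy policy: slow down and turn when an obstacle is close ahead."""
    def decide(self, obs: Observation) -> Action:
        closest = min(obs.lidar_ranges) if obs.lidar_ranges else float("inf")
        if closest < 0.5:                      # obstacle within 0.5 m
            return Action(linear_velocity=0.0, angular_velocity=0.5)
        return Action(linear_velocity=0.8, angular_velocity=0.0)

def control_step(policy: SimplePolicy, obs: Observation) -> Action:
    # The sensor interface delivers obs; the actuation interface carries the
    # returned action back to the (real or simulated) robot body.
    return policy.decide(obs)
```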

The actuation interface enables the robot brains to send actuation or control signals to the robot bodies. The blueprint provides a reference implementation of the actuation interface that translates actuation commands received over ROS2 topics into the correct Universal Scene Description (OpenUSD) schema. This ensures that the control signals are accurately transmitted to the robot bodies in the virtual environment.
The actuation interface is crucial for controlling robot movements and actions. Developers can modify or replace this interface with any other translation needed for their specific robot brains, ensuring that the control signals are tailored to the unique requirements of each robot.
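The following is a minimal sketch of such a translation in Python, assuming rclpy and the OpenUSD (pxr) bindings are available. The stage file, prim path, topic name, and custom `cmd:` attributes are illustrative assumptions; the blueprint's reference implementation maps commands onto its own schema.

```python
# Hedged sketch of an actuation-interface shim: subscribe to a ROS2 velocity
# command and write it onto the robot's prim in an OpenUSD stage. Stage path,
# prim path, topic, and attribute names are assumptions for illustration.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from pxr import Usd, Sdf, Gf

class ActuationBridge(Node):
    def __init__(self, stage: Usd.Stage, robot_prim_path: str):
        super().__init__("actuation_bridge")
        self._prim = stage.GetPrimAtPath(robot_prim_path)
        # Custom attributes standing in for whatever schema the simulator expects.
        self._lin = self._prim.CreateAttribute("cmd:linearVelocity", Sdf.ValueTypeNames.Float3)
        self._ang = self._prim.CreateAttribute("cmd:angularVelocity", Sdf.ValueTypeNames.Float3)
        self.create_subscription(Twist, "/amr_01/cmd_vel", self._on_cmd, 10)

    def _on_cmd(self, msg: Twist) -> None:
        # Copy the ROS2 command into the USD stage for the simulator to consume.
        self._lin.Set(Gf.Vec3f(msg.linear.x, msg.linear.y, msg.linear.z))
        self._ang.Set(Gf.Vec3f(msg.angular.x, msg.angular.y, msg.angular.z))

def main() -> None:
    rclpy.init()
    stage = Usd.Stage.Open("warehouse_twin.usd")       # assumed stage file
    node = ActuationBridge(stage, "/World/Robots/AMR_01")  # assumed prim path
    rclpy.spin(node)
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```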
The sensor interface enables the robot brain to receive data from its sensors. The blueprint provides developers with a reference implementation of the sensor interface that translates sensor data received through gRPC streams from Sensor RTX to ROS2 topics consumed by robot brains. This ensures that the robot brains receive accurate and timely sensor data, enabling them to make informed decisions and perform their tasks effectively.
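The sketch below illustrates only the ROS2 half of that translation: it republishes already-decoded camera frames as sensor_msgs/Image messages for a robot brain to consume. The Sensor RTX gRPC service definition is not reproduced here, so the frame source, topic name, and frame ID are assumptions.

```python
# Hedged sketch of the sensor-interface direction: take rendered camera frames
# (in the blueprint these arrive over gRPC streams from Sensor RTX) and
# republish them as ROS2 sensor_msgs/Image messages.
from typing import Iterable, Tuple

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

Frame = Tuple[int, int, bytes]   # (height, width, raw rgb8 pixels)

class CameraRepublisher(Node):
    def __init__(self) -> None:
        super().__init__("camera_republisher")
        self._pub = self.create_publisher(Image, "/amr_01/front_camera/image_raw", 10)

    def publish_frame(self, frame: Frame) -> None:
        height, width, data = frame
        msg = Image()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = "front_camera"   # assumed frame ID
        msg.height = height
        msg.width = width
        msg.encoding = "rgb8"
        msg.step = width * 3                   # bytes per row for rgb8
        msg.data = data
        self._pub.publish(msg)

def run(frames: Iterable[Frame]) -> None:
    rclpy.init()
    node = CameraRepublisher()
    for frame in frames:   # frames would be yielded by the gRPC stream client
        node.publish_frame(frame)
        rclpy.spin_once(node, timeout_sec=0.0)
    rclpy.shutdown()
```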
Developers can also integrate advanced visual AI agents, built with NVIDIA Metropolis and the NVIDIA AI blueprint for video search and summarization, into the workflow to bring richer insights and better decisions to industrial operations.
World Simulator
The World Simulator, developed with NVIDIA Omniverse and represented as an OpenUSD Stage, operates as the simulation runtime and is responsible for maintaining the state of the simulation and ensuring that all components are synchronized and accurate.
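Because the simulation state lives in an OpenUSD stage, it can be inspected with the standard OpenUSD Python API. The short sketch below opens a stage and prints the world-space position of every transformable prim; the file name `warehouse_twin.usd` is an assumption for illustration.

```python
# Minimal sketch: open a facility digital-twin stage with the OpenUSD Python
# API and walk its prims. The stage file name is assumed for illustration.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("warehouse_twin.usd")

# Traverse the scene graph and report every transformable prim's position.
for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Xformable):
        xform = UsdGeom.Xformable(prim)
        world_tf = xform.ComputeLocalToWorldTransform(Usd.TimeCode.Default())
        print(prim.GetPath(), world_tf.ExtractTranslation())
```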

Sensor RTX
NVIDIA Omniverse Cloud Sensor RTX APIs enable developers to accurately simulate what robots encounter in the real world. Using these APIs, you can render the output of camera, radar, and LiDAR sensors. When combined with OpenUSD data generated by the World Simulator, this delivers a comprehensive, physically accurate digital twin of the industrial facility. The APIs are built on NVIDIA Cloud Functions, an NVIDIA framework for hosting scalable functions in the cloud.

Scheduler
Last but not least is the Scheduler, which manages time and overall execution. The scheduler is responsible for modeling latencies, managing multi-rate execution, and respecting data dependencies when executing complex producer-consumer graphs. This ensures that the simulation runs smoothly and accurately, providing a reliable environment for testing and validation.
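The toy scheduler below, which is not the blueprint's implementation, illustrates the two core ideas of multi-rate execution and latency modeling: each task fires on its own period against a shared simulation clock, and a per-task latency determines when its output becomes visible to consumers. All periods and latencies are made-up example values.

```python
# Illustrative sketch (not the blueprint's Scheduler) of multi-rate execution
# with a simple latency model on a shared simulation clock.
import heapq
from typing import Callable, List, Tuple

class MultiRateScheduler:
    """Runs each task at its own period; latency delays when results 'land'."""

    def __init__(self) -> None:
        # Heap entries: (next_fire_time, task_id, period, latency, step_fn)
        self._queue: List[Tuple[float, int, float, float, Callable[[float], None]]] = []
        self._next_id = 0

    def add_task(self, period_s: float, latency_s: float,
                 step: Callable[[float], None]) -> None:
        heapq.heappush(self._queue, (period_s, self._next_id, period_s, latency_s, step))
        self._next_id += 1

    def run_until(self, t_end: float) -> None:
        while self._queue and self._queue[0][0] <= t_end:
            t, tid, period, latency, step = heapq.heappop(self._queue)
            step(t + latency)   # result becomes visible after the modeled latency
            heapq.heappush(self._queue, (t + period, tid, period, latency, step))

# Example: a 20 Hz physics step and a 10 Hz sensor render with 20 ms latency.
sched = MultiRateScheduler()
sched.add_task(0.05, 0.00, lambda t: print(f"physics step result at t={t:.2f}s"))
sched.add_task(0.10, 0.02, lambda t: print(f"sensor render visible at t={t:.2f}s"))
sched.run_until(0.2)
```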

Get started
By leveraging this powerful reference workflow, you can test complex scenarios, optimize fleet testing costs, accelerate commissioning, and deliver more efficient and effective real-world operations. To see the blueprint in action, visit the interactive demo on build.nvidia.com.
- Visit the Omniverse developer page to get all the essentials you need to get started.
- Access a collection of OpenUSD resources, including the new self-paced Learn OpenUSD training curriculum.
- Connect with the Omniverse Developer Community.
- Get started with Developer Starter Kits to quickly develop and enhance your own applications and services.
- Stay up to date by subscribing to NVIDIA news and following NVIDIA Omniverse on Discord, YouTube, and Medium.