You can download our dataset from here, where we release 30k joint trajectories of a mobile manipulator across 119 scenes.
The dataset is organized as follows:

- robot_urdf/
- scene_urdf/
  - &lt;physcene_xxx&gt;/
- pick/
  - &lt;physcene_xxx&gt;/
    - &lt;object_linkname&gt;/
      - &lt;timestamp&gt;/
        - &lt;instance-id&gt;/
          - vkc_request.json
          - pick_vkc_return.json
          - config.json
          - &lt;trajectory&gt;/
            - pick_vkc_caption_trajectory.json
- place/
  - &lt;physcene_xxx&gt;/
    - &lt;object_linkname&gt;/
      - &lt;timestamp&gt;/
        - &lt;instance-id&gt;/
          - vkc_request.json
          - place_vkc_return.json
          - config.json
          - &lt;trajectory&gt;/
            - pick_vkc_caption_trajectory.json

where:
- `robot_urdf/` contains the URDF model of the robot.
- `scene_urdf/<physcene_xxx>/` contains the URDF assets of the scene; `physcene_xxx` is the ID of the scene.
- `pick/<physcene_xxx>/<object_linkname>/` contains all the pick tasks involving the object `<object_linkname>` in the scene `<physcene_xxx>`.
- `.../<instance-id>/vkc_request.json` is the planning request configuration the VKC motion planner uses to plan a solution.
- `.../<instance-id>/pick_vkc_return.json` is the output of the VKC motion planner.
- `.../<instance-id>/config.json` is the task configuration file, which contains the link name of the target object, the initial pose of the target object, and the initial position of the robot.
- `.../<instance-id>/trajectory/pick_vkc_caption_trajectory.json` contains the joint trajectory that accomplishes this task instance, together with a language description of the task.
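As a hedged sketch of consuming this layout, the following walks every pick task instance and loads its `config.json`; directory names follow the structure above, but any JSON key names inside the files are left to you to inspect:

```python
import json
from pathlib import Path

def iter_pick_instances(dataset_root):
    """Yield (instance_dir, config) for every pick task instance.

    Follows the layout pick/<physcene_xxx>/<object_linkname>/<timestamp>/<instance-id>/.
    """
    root = Path(dataset_root)
    for config_path in sorted(root.glob("pick/*/*/*/*/config.json")):
        with open(config_path) as f:
            # Task configuration: target link name, object pose, robot position.
            config = json.load(f)
        yield config_path.parent, config
```

The same pattern applies to `place/` by swapping the first path component.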
Our evaluation program consists of two phases:
- Evaluation in PyBullet: assesses whether trajectories generated by different models adhere to physical constraints, measuring collision rate, joint-limit violations, and trajectory smoothness.
- Evaluation in Isaac Sim: evaluates task success rates, such as whether the pick trajectory successfully picks up the object or whether the place trajectory stably places the object on the target plane.
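The PyBullet-phase checks can be sketched in plain Python; note that the metric definitions below are illustrative assumptions, not the benchmark's exact formulas:

```python
def joint_limit_violations(trajectory, lower, upper):
    """Count waypoints where any joint leaves its [lower, upper] range."""
    count = 0
    for q in trajectory:
        if any(qi < lo or qi > hi for qi, lo, hi in zip(q, lower, upper)):
            count += 1
    return count

def max_joint_step(trajectory):
    """Largest per-step joint displacement; a crude smoothness proxy."""
    worst = 0.0
    for q_prev, q_next in zip(trajectory, trajectory[1:]):
        step = max(abs(b - a) for a, b in zip(q_prev, q_next))
        worst = max(worst, step)
    return worst
```

Collision checking, by contrast, requires stepping the trajectory through the physics engine itself.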
Our dataset includes a complete set of 3D scene assets and joint trajectories for accomplishing tasks. We highly recommend developing your own evaluation program to better align with your specific objectives.
Run the following commands to set up the evaluation environment for PyBullet:

```bash
cd evaluation/
bash setup.sh
# Return to the repo root after installation
cd ../
```

Then update the source code of yourdfpy in `yourdfpy/urdf.py` (lines 1240–1244) as follows:
```python
# Original code
# if not np.all(geometry.mesh.scale == geometry.mesh.scale[0]):
#     _logger.warning(
#         f"Warning: Can't scale axis independently, will use the first entry of '{geometry.mesh.scale}'"
#     )
new_s = new_s.scaled([geometry.mesh.scale[0], geometry.mesh.scale[1], geometry.mesh.scale[2]])
```

To evaluate trajectories in Isaac Sim, set up the wrapped Isaac environment (TongVerse) with:
```bash
cd tongverse/
bash setup.sh
# Return to the repo root after installation
cd ../
```

- Ensure your generated trajectory follows the same structure as the output of M2Diffuser.
- Update the directory paths in `utils/path.py`. The `VKC_DEPS` variable should point to the `env` directory containing the robot and scene models:
```
env/
  tongverse_agents/
    agent/
      Mec_kinova/
  physcene/
    physcene_10/
    physcene_20/
    ...
```

- Modify the trajectory directory path in `config/task/pick.yaml`.
- Add trajectory loading code to `evaluate_traj.py`.
- Run the following command to evaluate the trajectory in PyBullet:
```bash
bash scripts/evaluate.sh
```

Results will be stored in `<model-name>/results_dataset/`.
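The trajectory-loading step mentioned above might look like the minimal sketch below; the one-JSON-per-instance layout and the `"trajectory"` key are assumptions about your own model's result files, so adapt it to whatever format your generator emits:

```python
import json
from pathlib import Path

def load_result_trajectories(result_dir):
    """Collect generated trajectories under result_dir.

    Assumes one JSON file per task instance whose top-level "trajectory"
    key holds a list of joint configurations (an assumed format).
    """
    trajectories = {}
    for path in sorted(Path(result_dir).rglob("*.json")):
        with open(path) as f:
            data = json.load(f)
        if "trajectory" in data:
            trajectories[str(path)] = data["trajectory"]
    return trajectories
```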
- Activate the TongVerse conda environment (see `tongverse/setup.py` for details).
- Move `evaluate/tv_evaluate` into the `tongverse/` directory.
- Run the following command to evaluate the trajectory in Isaac Sim:
```bash
python evaluate_pick.py --result_dir /your_ws_path/m3bench/results/pick/${timestamp} --dataset_test_dir /your_data_path/pick/test
```

Finally, you can process the aggregated results with:
```bash
python eval_all_result_pick_dataset.py --result_dir ../../results_dataset/pick/${timestamp} --dataset_test_dir /your_data_path/pick/test
```