
M3Bench: Benchmarking Whole-Body Motion Generation for Mobile Manipulation in 3D Scenes

IEEE Robotics and Automation Letters (RA-L), 2025

Paper PDF | Project Page | YouTube Video

1. Dataset Description

You can download our dataset from here, where we release 30k joint trajectories of a mobile manipulator across 119 scenes.

The dataset is organized as follows:

- robot_urdf/
- scene_urdf/
	- <physcene_xxx>/
- pick/
	- <physcene_xxx>/
		- <object_linkname>/
			- <timestamp>/
				- <instance-id>/
					- vkc_request.json
					- pick_vkc_return.json
					- config.json
					- <trajectory>
						- pick_vkc_caption_trajectory.json
- place/
	- <physcene_xxx>/
		- <object_linkname>/
			- <timestamp>/
				- <instance-id>/
					- vkc_request.json
					- place_vkc_return.json
					- config.json
					- <trajectory>
						- pick_vkc_caption_trajectory.json

where

  • robot_urdf/ contains the URDF model of the robot.
  • scene_urdf/<physcene_xxx>/ contains the URDF asset of the scene, and physcene_xxx is the ID of the scene.
  • pick/<physcene_xxx>/<object_linkname> contains all the pick tasks related to object <object_linkname> in the scene <physcene_xxx>.
  • .../<instance-id>/vkc_request.json is the planning request configuration used by the VKC motion planner to compute a solution.
  • .../<instance-id>/pick_vkc_return.json contains the output of the VKC motion planner.
  • .../<instance-id>/config.json is the task configuration file, which contains the link name of the target object, the initial pose of the target object, and the initial position of the robot.
  • .../<instance-id>/trajectory/pick_vkc_caption_trajectory.json contains the joint trajectory that accomplishes this task instance, together with a language description of the task (a loading sketch is given after this list).
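
To make the layout concrete, the following Python sketch walks one scene's pick tasks and loads the request, result, and captioned trajectory for each instance. The dataset root path and the scene ID in the usage comment are placeholders, not part of the release:

import json
from pathlib import Path

# Hypothetical dataset root; point this at the extracted release.
DATASET_ROOT = Path("/path/to/m3bench_dataset")

def iter_pick_instances(scene_id):
    """Yield (config, request, result, trajectory) for every pick instance in a scene."""
    scene_dir = DATASET_ROOT / "pick" / scene_id
    # Layout: <object_linkname>/<timestamp>/<instance-id>/
    for instance_dir in sorted(scene_dir.glob("*/*/*")):
        if not instance_dir.is_dir():
            continue
        config = json.loads((instance_dir / "config.json").read_text())
        request = json.loads((instance_dir / "vkc_request.json").read_text())
        result = json.loads((instance_dir / "pick_vkc_return.json").read_text())
        # The captioned joint trajectory sits one level deeper (see the tree above).
        traj_files = list(instance_dir.glob("*/pick_vkc_caption_trajectory.json"))
        trajectory = json.loads(traj_files[0].read_text()) if traj_files else None
        yield config, request, result, trajectory

# Example usage (the scene ID is illustrative):
# for cfg, req, res, traj in iter_pick_instances("physcene_10"):
#     print(traj is not None)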

2. Evaluation Protocol

Our evaluation program consists of two phases:

  1. Evaluation in PyBullet: Assesses whether trajectories generated by different models adhere to physical constraints, measured by collision rate, joint-limit violations, and trajectory smoothness.

  2. Evaluation in Isaac Sim: Measures task success, e.g., whether a pick trajectory actually lifts the object and whether a place trajectory stably places it on the target plane.

Our dataset includes a complete set of 3D scene assets and joint trajectories for accomplishing tasks. We highly recommend developing your own evaluation program to better align with your specific objectives; a minimal sketch of the physical-constraint checks is given below.
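
If you do write your own evaluator, the first phase reduces to per-trajectory physical checks. The snippet below is a minimal numpy sketch of two of them, joint-limit violations and a jerk-based smoothness score; it is illustrative only, not the benchmark's official metric implementation, and the limits in the usage comment are placeholders. Collision rate additionally requires loading the robot and scene URDFs into a physics engine such as PyBullet and querying contacts at each waypoint:

import numpy as np

def check_physical_constraints(traj, lower, upper, dt=0.1):
    """traj: (T, D) array of joint positions; lower/upper: (D,) joint limits.

    Returns a joint-violation count and a jerk-based smoothness proxy (lower is smoother).
    Illustrative sketch only; not the official M3Bench metrics.
    """
    traj = np.asarray(traj, dtype=float)
    # Count waypoints where any joint leaves its limits.
    violations = int(np.logical_or(traj < lower, traj > upper).any(axis=1).sum())
    # Finite-difference jerk as a crude smoothness proxy.
    vel = np.diff(traj, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    jerk = np.diff(acc, axis=0) / dt
    mean_jerk = float(np.linalg.norm(jerk, axis=1).mean()) if len(jerk) else 0.0
    return {"joint_violations": violations, "mean_jerk": mean_jerk}

# Example with a toy 2-DoF trajectory:
# check_physical_constraints(np.zeros((50, 2)), lower=np.full(2, -3.0), upper=np.full(2, 3.0))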

2.1 Installation & Setup

PyBullet Evaluation Environment

Run the following commands to set up the evaluation environment for PyBullet:

cd evaluation/
bash setup.sh
# Return to the repo root after installation
cd ../

Modifying yourdfpy Source Code

Update the source code of yourdfpy in yourdfpy/urdf.py (lines 1240–1244) as follows:

# Original code
# if not np.all(geometry.mesh.scale == geometry.mesh.scale[0]):
#     _logger.warning(
#         f"Warning: Can't scale axis independently, will use the first entry of '{geometry.mesh.scale}'"
#     )
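# Replacement: apply the full per-axis scale instead of only the first entry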
new_s = new_s.scaled([geometry.mesh.scale[0], geometry.mesh.scale[1], geometry.mesh.scale[2]])

Isaac Sim Evaluation Environment

To evaluate trajectories in Isaac Sim, set up the wrapped Isaac environment (TongVerse) with:

cd tongverse/
bash setup.sh
# Return to the repo root after installation
cd ../

2.2 Evaluate Your Results

Preparing for Evaluation

  1. Ensure your generated trajectory follows the same structure as the output of M2Diffuser.
  2. Update the directory paths in utils/path.py. The VKC_DEPS variable should point to the env directory containing the robot and scene models (an illustrative setting is sketched after this list):
env/
    tongverse_agents/
        agent/
            Mec_kinova/
    physcene/
        physcene_10/
        physcene_20/
        ...
  3. Modify the trajectory directory path in config/task/pick.yaml.
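
For reference, the setting in step 2 might look like the sketch below; apart from the VKC_DEPS variable name and the directory layout shown above, everything here is an assumption:

# utils/path.py (illustrative values; adjust to your machine)
import os

# Root of the env/ directory that holds the robot and scene models.
VKC_DEPS = "/your_data_path/env"

# Derived locations, assuming the layout shown above (names are hypothetical).
AGENT_DIR = os.path.join(VKC_DEPS, "tongverse_agents", "agent", "Mec_kinova")
SCENE_DIR = os.path.join(VKC_DEPS, "physcene")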

Evaluation in PyBullet

  1. Add trajectory loading code to evaluate_traj.py (a minimal loader sketch is given below).
  2. Run the following command to evaluate the trajectory in PyBullet:
bash scripts/evaluate.sh

Results will be stored in <model-name>/results_dataset/.
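
Step 1 above asks for trajectory-loading code in evaluate_traj.py. A minimal sketch of such a loader is shown below; the per-file "trajectory" key and the returned (T, D) numpy-array format are assumptions about your model's output, so adapt them to the structure your generator actually writes:

import json
from pathlib import Path
import numpy as np

def load_generated_trajectories(result_dir):
    """Load model-generated joint trajectories from a results directory.

    Assumes one JSON file per task instance with a "trajectory" key holding
    a list of joint-position waypoints; adjust the keys to your own output.
    """
    trajectories = {}
    for traj_file in sorted(Path(result_dir).rglob("*.json")):
        data = json.loads(traj_file.read_text())
        if "trajectory" in data:
            trajectories[traj_file.stem] = np.asarray(data["trajectory"], dtype=float)
    return trajectories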

Evaluation in Isaac Sim

  1. Activate the TongVerse conda environment (see tongverse/setup.py for details).
  2. Move evaluate/tv_evaluate into the tongverse/ directory.
  3. Run the following command to evaluate the trajectory in Isaac Sim:
python evaluate_pick.py --result_dir /your_ws_path/m3bench/results/pick/${timestamp} --dataset_test_dir /your_data_path/pick/test

Finally, you can aggregate the results across all task instances with:

python eval_all_result_pick_dataset.py --result_dir ../../results_dataset/pick/${timestamp} --dataset_test_dir /your_data_path/pick/test
