ArticuBot: Learning Universal Articulated Object Manipulation Policy via Large Scale Simulation, RSS 2025
Yufei Wang*, Ziyu Wang*, Mino Nakura†, Pratik Bhowal†, Chia-Liang Kuo†, Yi-Ting Chen, Zackory Erickson‡, David Held‡
(*† equal contribution, ‡ equal advising)
ArticuBot learns a universal policy for manipulating diverse articulated objects. It first generates a large amount of data in simulation, and then distills the data into a visual policy via hierarchical imitation learning. Finally, the learned policy can be zero-shot transferred to the real world.
Clone this git repo. We recommend working with a conda environment.
- First create the articubot conda environment:
conda env create -f environment.yaml
conda activate articubot
- Then install additional dependencies for 3D Diffusion Policy, on which our low-level goal-conditioned policy builds:
cd 3d_diffusion_policy/3D-Diffusion-Policy/3D-Diffusion-Policy
pip install -e .
pip install zarr==2.12.0 wandb ipdb gpustat dm_control omegaconf hydra-core==1.2.0 dill==0.3.5.1 einops==0.4.1 diffusers==0.11.1 numba==0.56.4 moviepy imageio av matplotlib termcolor
- Install pybullet:
cd bullet3
pip install -e .
- Install fpsample for farthest point sampling:
pip install fpsample
The installation might give an error asking you to install Rust first; following the instructions in the installation error message should be sufficient. If the installation runs into any other issue, check https://github.com/leonardodalinky/fpsample.
- (Optional) In addition to pybullet's default IK solver, we use TRAC-IK as an additional IK solver, which can be more accurate and yield better IK solutions. If you wish to install TRAC-IK, follow the instructions here: https://github.com/mjd3/tracikpy.
The above should be sufficient for training and evaluating ArticuBot policies.
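For intuition, here is what farthest point sampling (the operation fpsample provides) does: it greedily picks points that are maximally far from everything already selected, giving an even subsample of a point cloud. The following is a plain-NumPy sketch for illustration, not fpsample's actual (much faster) implementation:

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, k: int) -> np.ndarray:
    """Greedily select k point indices: each new point is the one
    farthest from the set selected so far. Illustrative sketch only;
    fpsample implements optimized variants of this in Rust."""
    n = points.shape[0]
    selected = np.zeros(k, dtype=np.int64)
    # Distance from each point to its nearest already-selected point.
    dists = np.full(n, np.inf)
    selected[0] = 0  # start from the first point (fpsample may randomize this)
    for i in range(1, k):
        last = points[selected[i - 1]]
        dists = np.minimum(dists, np.linalg.norm(points - last, axis=1))
        selected[i] = int(np.argmax(dists))
    return selected

# Example: subsample a random 1024-point cloud down to 128 points.
pts = np.random.default_rng(0).standard_normal((1024, 3))
idx = farthest_point_sampling(pts, 128)
print(idx.shape)  # (128,)
```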
All datasets we use can be downloaded from this Google Drive link.
- We use PartNet-Mobility as our simulation assets. We provide a parsed version in the above Google Drive link, named dataset.zip. After downloading, unzip it to data/dataset.
- The high-level policy training dataset is named dp3_demo.zip. Unzip it to data/dp3_demo.
- The low-level policy training dataset is named dp3_demo_combined_2_step_0.zip. Unzip it to data/dp3_demo_combined_2_step_0.
- Some files are also needed for running evaluation (the simulator init states). They are named diverse_objects.zip. Unzip it to data/diverse_objects.
You can run python scripts/visualization_scripts/check_data.py to visualize the stored datasets.
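Before training, it can be useful to confirm everything unzipped to the right place. A small hypothetical helper (the directory names follow the instructions above; adjust ROOT if you placed the data elsewhere):

```python
from pathlib import Path

# Expected data layout after unzipping, per the download instructions.
ROOT = Path("data")
EXPECTED = [
    "dataset",                      # parsed PartNet-Mobility assets
    "dp3_demo",                     # high-level policy training data
    "dp3_demo_combined_2_step_0",   # low-level policy training data
    "diverse_objects",              # simulator init states for evaluation
]

def missing_dirs(root: Path = ROOT) -> list:
    """Return the expected subdirectories that are not present under root."""
    return [name for name in EXPECTED if not (root / name).is_dir()]

if __name__ == "__main__":
    missing = missing_dirs()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All dataset directories found.")
```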
Assume you have downloaded the training dataset following the above instructions.
For training the high-level policy:
source prepare.sh
bash scripts/weighted-displacement-high-level/train-weighted-displacement.sh
Change num_train_objects in this script to train with a different number of objects (default: 50 objects).
The training logs for the high-level policy will be at weighted_displacement_model/exps/.
You can find an example training log for the high-level policy (with 50 objects) in this wandb space: https://wandb.ai/marswang/articubot-high-level-weighted-displacement?nw=nwusermarswang
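Conceptually, a weighted-displacement policy predicts, for each point in the observed point cloud, an offset toward the goal plus a confidence weight, and aggregates the per-point "votes" by weighted averaging. The toy NumPy sketch below shows only this aggregation step; the variable names and shapes are illustrative, not the repo's API:

```python
import numpy as np

def aggregate_weighted_displacement(points, displacements, logits):
    """Combine per-point goal predictions into a single goal point.

    points:        (N, 3) observed point cloud
    displacements: (N, 3) per-point predicted offset toward the goal
    logits:        (N,)   per-point confidence logits
    Each point votes for goal = point + displacement; votes are averaged
    with softmax weights. Illustrative sketch, not the repo's model.
    """
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    votes = points + displacements
    return (weights[:, None] * votes).sum(axis=0)

# Toy example: if every point votes for the same goal, the weighted
# average recovers it exactly.
pts = np.random.default_rng(1).standard_normal((256, 3))
goal = np.array([0.5, 0.2, 0.9])
out = aggregate_weighted_displacement(pts, goal - pts, np.zeros(256))
print(np.allclose(out, goal))  # True
```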
For training the low-level policy:
source prepare.sh
bash scripts/low-level/train_unet_diffusion_low_level.sh
Change num_train_objects in the script to train with a different number of objects (default and recommended: 50 objects).
The training logs for the low-level policy will be at 3d_diffusion_policy/3D-Diffusion-Policy/3D-Diffusion-Policy/data/.
You can find an example training log for the low-level policy in this wandb space: https://wandb.ai/marswang/articubot_dp3_low_level?nw=nwusermarswang
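The low-level policy generates actions with a diffusion model (building on DP3). As a rough illustration of how such a policy samples actions, here is a toy DDPM-style reverse loop with a stub noise predictor; the schedule, shapes, and denoiser are illustrative assumptions only, not the repo's implementation:

```python
import numpy as np

def ddpm_sample(denoise_fn, shape, n_steps=50, rng=None):
    """Toy DDPM reverse process: start from Gaussian noise and iteratively
    denoise into an action sequence. denoise_fn(x, t) stands in for the
    policy's noise-prediction network; the linear beta schedule is for
    illustration only."""
    if rng is None:
        rng = np.random.default_rng(0)
    betas = np.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(shape)
    for t in reversed(range(n_steps)):
        eps = denoise_fn(x, t)  # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

# Stub denoiser that predicts zero noise; the real policy conditions on
# the point cloud observation and the high-level goal.
actions = ddpm_sample(lambda x, t: np.zeros_like(x), shape=(16, 7))
print(actions.shape)  # (16, 7)
```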
Assume you have downloaded the needed evaluation files following the above instructions.
We provide pretrained high-level and low-level checkpoints through this Google Drive link:
The low-level policy ckpt is named low-level-ckpt.zip. Unzip it to data/low-level-ckpt.
The high-level policy ckpt is named high_level_200_obj_ckpt.pth. Download it and place it at data/high_level_200_obj_ckpt.pth.
To evaluate trained high-level and low-level policies:
source prepare.sh
bash scripts/eval.sh
The evaluation results (including gifs of rollouts and normalized opening performances of each rollout) will be saved at 3d_diffusion_policy/3D-Diffusion-Policy/3D-Diffusion-Policy/data/.
To print the quantitative numbers, run python scripts/print_eval_results.py --d your_eval_run_results_dir
Using the provided pretrained policy checkpoints, you should expect a performance of around 0.68, averaged over 10 test objects with 25 test trials each (each trial uses a different object pose and initial robot joint angles).
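For clarity, the reported number is a flat mean of the per-trial normalized opening performance over all 10 objects and 25 trials each. A minimal sketch of that averaging (the results array here is random placeholder data, not real evaluation output):

```python
import numpy as np

# Placeholder: one normalized opening performance per trial,
# for 10 test objects x 25 trials each.
rng = np.random.default_rng(42)
results = rng.uniform(0.0, 1.0, size=(10, 25))

per_object = results.mean(axis=1)  # average over the 25 trials per object
overall = per_object.mean()        # average over the 10 objects
print(round(float(overall), 2))
```

Since every object has the same number of trials, averaging per object first and then over objects is equivalent to taking the mean over all 250 trials directly.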
We provide the data generation code in a different branch: gen_demo.
Run:
git fetch origin
git checkout -b gen_demo origin/gen_demo
And then follow the instructions in the readme of the gen_demo branch for demonstration generation.
We thank the authors of the following repositories for open-sourcing their code, which our codebase is built upon:
- DP3: https://github.com/YanjieZe/3D-Diffusion-Policy
- RoboGen: https://github.com/Genesis-Embodied-AI/RoboGen
- Act3D: https://github.com/zhouxian/act3d-chained-diffuser
- PointNet++: https://github.com/yanx27/Pointnet_Pointnet2_pytorch
- OMPL: https://ompl.kavrakilab.org/
- Bullet: https://github.com/bulletphysics/bullet3
If you find this codebase useful in your research, please consider citing:
@inproceedings{Wang2025articubot,
title={ArticuBot: Learning Universal Articulated Object Manipulation Policy via Large Scale Simulation},
author={Wang, Yufei and Wang, Ziyu and Nakura, Mino and Bhowal, Pratik and Kuo, Chia-Liang and Chen, Yi-Ting and Erickson, Zackory and Held, David},
booktitle={Robotics: Science and Systems (RSS)},
year={2025}
}
