Sha Yi*, Xueqian Bai*, Adabhav Singh, Jianglong Ye, Michael T. Tolley, Xiaolong Wang
Conference on Robot Learning (CoRL), 2025
- Python 3.10
- GPU optional but recommended (tested on RTX 3090, 4090)
- Key Python deps are listed in `requirements.txt`:
  - `warp-lang==1.6.1`
  - `torch>=2.0`
  - numpy, scipy, trimesh, usd-core, tensorboard (optional)
```bash
conda create -n codesign python=3.10
conda activate codesign
pip install -r requirements.txt

# optional: verify Warp
python -c "import warp as wp; print(wp.__version__)"  # expect 1.6.1
```

We include the mesh for 006_mustard_bottle from the YCB dataset so you can try the simulation immediately. For example:
```bash
# simple run with rendering
python code/sim_gen.py --render --object_name 006_mustard_bottle --pose_id 5

# use a trained predictor to automatically pick the best pose
python code/sim_gen.py --render \
    --pick_best_pose \
    --model_name all_layer5_drop0.3_hidden1024_pointout128_batch512_lr0.001_weight0.001_seed42_l1_model
```

Results are written as USD files to `output/`. You can view them with an Omniverse-compatible USD viewer (e.g., usdview). The default stiffness distribution is already set to the optimized values from our trained model.
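To inspect a result programmatically rather than in a viewer, the usd-core package from the requirements can open the stage directly. A minimal sketch; the filename below is hypothetical, so substitute an actual file from `output/`:

```python
# Sketch: list the prims in a generated USD file using usd-core (pxr).
# The filename is hypothetical; check output/ for the real name.
from pxr import Usd

stage = Usd.Stage.Open("output/006_mustard_bottle_pose5.usd")
for prim in stage.Traverse():
    print(prim.GetPath(), prim.GetTypeName())
```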
The hardware models, setup, assembly instructions, and motor control scripts are in `hardware/`. See the README in that folder for details.
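For orientation, commanding a position on a Dynamixel servo with the official dynamixel_sdk follows the pattern below. This is a generic sketch, not this repo's script: the port, servo ID, baud rate, and X-series control-table addresses are all assumptions to adapt to your setup.

```python
# Generic Dynamixel SDK sketch (not the repo's motor control code).
# Port, ID, baud rate, and control-table addresses are assumptions
# for an X-series servo; adjust for your hardware.
from dynamixel_sdk import PortHandler, PacketHandler

port = PortHandler("/dev/ttyUSB0")
packet = PacketHandler(2.0)                 # protocol 2.0

port.openPort()
port.setBaudRate(57600)
packet.write1ByteTxRx(port, 1, 64, 1)       # enable torque (addr 64)
packet.write4ByteTxRx(port, 1, 116, 2048)   # goal position (addr 116)
port.closePort()
```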
- YCB meshes are expected under `models/ycb/<object_name>/google_16k/nontextured.ply`. This repo includes the mustard bottle for convenience (loadable with trimesh; see the sketch after this list). You may download the entire YCB dataset here.
- Pose priors from AnyGrasp are stored under `pose_info/` (see that folder's README for details). Initial finger transforms are cached in `pose_info/init_opt/` to skip re-initialization. If you wish to process new objects, we recommend using our updated AnyGrasp setup here.
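As a quick sanity check of the data layout, the bundled mesh can be loaded with trimesh (already in the requirements); a minimal sketch, assuming the path above:

```python
# Sketch: sanity-check the bundled YCB mesh with trimesh.
import trimesh

mesh = trimesh.load("models/ycb/006_mustard_bottle/google_16k/nontextured.ply")
print(mesh.vertices.shape, mesh.faces.shape)  # (V, 3) and (F, 3)
```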
`code/sim_datagen.py` generates simulation rollouts and logs:

```bash
python code/sim_datagen.py \
    --object_name 006_mustard_bottle \
    --use_graph \
    --random \
    --pose_iters 20 \
    --save_log
```

You may add other parameters as needed. Note that GPU memory access can get tricky when the CUDA graph is used, but the graph speeds up the simulation significantly once it is correctly created.
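For context on the `--use_graph` note above, CUDA-graph capture in Warp follows the pattern below (an illustrative sketch, not the repo's simulation loop): everything launched inside the capture must reuse pre-allocated arrays, which is where the memory handling gets tricky.

```python
# Illustrative Warp CUDA-graph sketch; not the repo's sim loop.
import warp as wp

wp.init()

@wp.kernel
def scale(x: wp.array(dtype=float), s: float):
    i = wp.tid()
    x[i] = x[i] * s

# Allocate before capture: reallocating inside the captured region
# would invalidate the graph.
x = wp.zeros(1024, dtype=float, device="cuda")

with wp.ScopedCapture(device="cuda") as capture:
    wp.launch(scale, dim=x.shape[0], inputs=[x, 1.01], device="cuda")

for _ in range(100):
    wp.capture_launch(capture.graph)  # cheap replay of the recorded kernels
```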
Before training, you need particle point clouds of the generated data along with the corresponding finger poses. You can use the helper script `code/generate_partial_pointcloud.py` to generate them.
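A plausible invocation, with the flag assumed by analogy with the other scripts here (check `--help` for the real interface):

```bash
# Hypothetical flags; see code/generate_partial_pointcloud.py --help.
python code/generate_partial_pointcloud.py --object_name 006_mustard_bottle
```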
Use `code/train.py` to train the predictor. Example (see `--help` for full options):

```bash
python code/train.py --object_name all
```

Evaluation helpers are in `code/train_eval.py`.
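A plausible evaluation call, reusing the checkpoint name from the quick-start example above; the flag names are assumptions, so check `--help`:

```bash
# Hypothetical flags; see code/train_eval.py --help for the real options.
python code/train_eval.py --object_name all \
    --model_name all_layer5_drop0.3_hidden1024_pointout128_batch512_lr0.001_weight0.001_seed42_l1_model
```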
If you wish to add more objects to the data generation and training pipeline, `code/init_pose.py` optimizes feasible starting finger transforms for each object pose based on the AnyGrasp initial guess, and caches them in `pose_info/init_opt/`. It is also used to augment existing poses by adding random perturbations with `--random`.
```bash
python code/init_pose.py --object_name 006_mustard_bottle --train_iters 15000
```

- `code/`: simulator, training, and utilities
- `pose_info/`: AnyGrasp wrist/pose priors and cached initial finger transforms
- `hardware/`: models, build notes, and motor control
- `hardware/motor_control/`: Dynamixel motor control code
- `models/`: YCB meshes and other models
- `sim_models/`: trained neural physics models
```bibtex
@inproceedings{yi2025codesign,
  title     = {Co-Design of Soft Gripper with Neural Physics},
  author    = {Yi, Sha and Bai, Xueqian and Singh, Adabhav and Ye, Jianglong and Tolley, Michael T and Wang, Xiaolong},
  booktitle = {Conference on Robot Learning (CoRL)},
  year      = {2025}
}
```

See LICENSE for details. THIS SOFTWARE AND/OR DATA WAS DEPOSITED IN THE BAIR OPEN RESEARCH COMMONS REPOSITORY ON Sep 1 2025.
