- Introduction
- Setup
- Data Download
- Data Preprocessing
- Train VAE
- Pre-save Features
- Train SB Models
- Usage
- Notes
This repository contains the official implementation of *Grasp2Grasp: Vision-Based Dexterous Grasp Translation via Schrödinger Bridges*.
```bash
conda env create -f environment.yml
conda activate grasp2grasp
```
If you encounter any issues, you may need to build pytorch3d==0.7.2 and xformers==0.0.21 from source.
Download the MultiGripperGrasp dataset, then place and extract it under ./data/.
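Before kicking off the long preprocessing step, it can be worth sanity-checking that the archive extracted where the scripts expect it. A minimal sketch (the required subfolder names below are placeholders, not the dataset's actual layout — check against your extracted copy):

```python
from pathlib import Path

def check_dataset_root(root, required=("meshes", "grasps")):
    """Return the required entries that are missing under the dataset root."""
    root = Path(root)
    return [name for name in required if not (root / name).exists()]

# Example: check_dataset_root("./data/MultiGripperGrasp") -> [] if everything is in place.
```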
⚠️ IMPORTANT: Reserve at least 1 TB of disk space. Due to the size of the dataset, preprocessing takes roughly two days on a 48-core CPU.
Preprocess the dataset and generate point cloud observations using:
```bash
cd dataset/preproc && \
python mgg_parse_objects.py && \
python mgg_to_pc_parallel.py && \
python mgg_to_pc_parallel_human.py && \
python mgg_to_pc_parallel_shadow.py && \
python process_contact.py
```
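The `_parallel` scripts fan the per-object work out across CPU cores, which is why the 48-core runtime estimate above applies. The overall pattern looks roughly like this (the per-object worker is a placeholder, not the repo's actual conversion code):

```python
from multiprocessing import Pool

def process_one(mesh_path):
    # Placeholder for the per-object work (e.g. sampling a point cloud
    # from the mesh and writing it to disk); returns the path it handled.
    return mesh_path

def preprocess_all(mesh_paths, workers=48):
    # imap_unordered keeps every core busy even when per-object cost varies.
    with Pool(processes=workers) as pool:
        return list(pool.imap_unordered(process_one, mesh_paths))
```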
Train the VAE using:
```bash
python train_ae.py --config /path/to/vae_config
```
We provide example config files under ./config/mgg. Alternatively, you can download the trained checkpoints and place them under ./logs.
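The authoritative schema is in the configs shipped under ./config/mgg; as a rough sketch, a VAE config is a JSON file with fields along these lines (every key and value here is illustrative, not the repo's actual schema):

```json
{
  "log_dir": "./logs/vae_mgg",
  "data_root": "./data",
  "latent_dim": 128,
  "batch_size": 64,
  "lr": 1e-4,
  "epochs": 200
}
```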
Precompute and save object point cloud features:
```bash
cd dataset/scripts && \
python save_mgg_pc_latent.py
```
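Conceptually, this step encodes each object point cloud once and caches the latent to disk so training never re-runs the encoder. A minimal sketch of that loop — the `encode_stub` below is a stand-in (mean/std pooling), whereas the actual script uses the pretrained LION encoder:

```python
import numpy as np
from pathlib import Path

def encode_stub(points):
    # Stand-in for a trained encoder: mean + std pooling over the points.
    return np.concatenate([points.mean(axis=0), points.std(axis=0)])

def cache_latents(pc_dir, out_dir, encode=encode_stub):
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for pc_file in sorted(Path(pc_dir).glob("*.npy")):
        latent = encode(np.load(pc_file))        # (D,) feature vector
        np.save(out_dir / pc_file.name, latent)  # cached under the same stem
```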
You can download the pretrained LION checkpoints here and place them under ./logs.
Precompute and save hand point cloud features:
```bash
cd dataset/scripts && \
python preprocess_latent.py --config /path/to/vae_config
```
where /path/to/vae_config is the config file of the corresponding VAE.
(Optional) Precompute and save grasp GWH:
```bash
cd dataset/scripts && \
python compute_gwh.py
```
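The GWH similarity metric is built from the wrenches exerted by a grasp's contacts. As a small illustration (not the repo's implementation), the 6-D wrench contributed by one contact force about the object center is the force stacked with its induced torque:

```python
import numpy as np

def contact_wrench(point, force, center=np.zeros(3)):
    """6-D wrench [force; torque] of a contact force about the object center."""
    r = np.asarray(point, dtype=float) - center
    torque = np.cross(r, force)
    return np.concatenate([np.asarray(force, dtype=float), torque])
```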
(Optional) We provide the precomputed Jacobians of each grasp here.
Train the SB model using:
```bash
python train_fm_ddp.py --config /path/to/sb_config
```
We provide an example config file, sbfm_human_allegro.json, under ./config/mgg. Due to cloud-drive size limits, we release pretrained checkpoints only for the H->A and H->S settings trained with the GWH similarity metric here.
After training, you can sample via:
```bash
python sample.py --config /path/to/sb_config
```
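At sampling time, the trained model's drift is integrated from the source-gripper distribution toward the target grasp. Stripped to its essentials, that is a fixed-step ODE integration; a minimal Euler sketch with a placeholder velocity field (the actual sampler and step schedule live in sample.py):

```python
import numpy as np

def euler_sample(x0, velocity, steps=100):
    """Integrate dx/dt = velocity(x, t) from t=0 to t=1 with fixed Euler steps."""
    x, dt = np.asarray(x0, dtype=float), 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity(x, i * dt)
    return x
```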
Report the results by:
```bash
python eval_samples.py
```
Install extra dependencies:
```bash
pip install urdf-parser-py plotly transformations transforms3d
```
Install Isaac Gym and run:
```bash
cd grasp_test && python isaac_test_right.py --robot_name <ROBOT_NAME> --eval_dir <PATH_TO_SAMPLE_FOLDER>
```
If you find this codebase useful in your research, consider citing:
```bibtex
@inproceedings{
zhong2025grasp2grasp,
title={Grasp2Grasp: Vision-Based Dexterous Grasp Translation via Schr\"odinger Bridges},
author={Tao Zhong and Jonah Buchanan and Christine Allen-Blanchette},
booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems (NeurIPS)},
year={2025}
}
```

The following repositories are used in this repository, either in close-to-original form or as inspiration:
Unless otherwise noted in the submodules, the rest of this repo is licensed under the MIT License. See LICENSE for more details.