
n3il666/grasp2grasp


Grasp2Grasp: Vision-Based Dexterous Grasp Translation via Schrödinger Bridges (NeurIPS 2025)

arXiv / Project Page


Introduction

This repository contains the official implementation of Grasp2Grasp: Vision-Based Dexterous Grasp Translation via Schrödinger Bridges.

Setup

conda env create -f environment.yml
conda activate grasp2grasp

If you encounter installation issues, you may need to build pytorch3d==0.7.2 and xformers==0.0.21 from source.

Data Download

Download the MultiGripperGrasp dataset and extract it under ./data/.
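Before kicking off the (long) preprocessing below, it can save time to sanity-check that the dataset landed where the scripts expect it. A minimal sketch — the subfolder name here is a placeholder; adjust it to whatever the MultiGripperGrasp release you downloaded actually contains:

```python
from pathlib import Path

def check_layout(root, expected):
    """Return the entries from `expected` that are missing under `root`."""
    root = Path(root)
    return [name for name in expected if not (root / name).exists()]

# Placeholder layout -- replace with the actual extracted folder names.
missing = check_layout("./data", ["MultiGripperGrasp"])
if missing:
    print("missing under ./data:", missing)
```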

Data Preprocessing

⚠️ IMPORTANT: Reserve at least 1 TB of disk space. Due to the size of the dataset, preprocessing takes roughly two days on a 48-core CPU.

Preprocess the dataset and generate point cloud observations using:

cd dataset/preproc && \
python mgg_parse_objects.py && \
python mgg_to_pc_parallel.py && \
python mgg_to_pc_parallel_human.py && \
python mgg_to_pc_parallel_shadow.py && \
python process_contact.py
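Because the steps above are chained with `&&`, the pipeline stops at the first script that exits non-zero, so a failure partway through will not silently corrupt later stages. If you prefer driving the chain from Python (e.g. to add logging or resume logic), a sketch of the same fail-fast behavior — the script names are taken from the listing above, everything else is illustrative:

```python
import subprocess
import sys

# Preprocessing scripts from the chain above, in order.
STEPS = [
    "mgg_parse_objects.py",
    "mgg_to_pc_parallel.py",
    "mgg_to_pc_parallel_human.py",
    "mgg_to_pc_parallel_shadow.py",
    "process_contact.py",
]

def run_steps(steps, runner=None):
    """Run each step in order; return the first failing step, or None on success."""
    if runner is None:
        runner = lambda s: subprocess.run([sys.executable, s]).returncode
    for step in steps:
        if runner(step) != 0:
            return step  # stop at the first failure, like `&&`
    return None
```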

Train VAE

Train the VAE using:

python train_ae.py --config /path/to/vae_config

We provide example config files under ./config/mgg. You can also download the trained checkpoints and place them under ./logs.

Pre-save Features

Precompute and save object point cloud features:

cd dataset/scripts && \
python save_mgg_pc_latent.py
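Pre-saving features follows a cache-once pattern: encode each input, write the result to disk, and skip inputs whose cache already exists, so interrupted runs can resume. A generic sketch of the pattern — the encoder and file layout here are placeholders, not the repo's actual API:

```python
import numpy as np
from pathlib import Path

def cache_features(src_dir, out_dir, encode, pattern="*.npy"):
    """Encode every array under src_dir into out_dir, skipping existing caches."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for src in sorted(Path(src_dir).glob(pattern)):
        dst = out_dir / src.name
        if dst.exists():  # already cached from a previous run -> skip
            continue
        feats = encode(np.load(src))
        np.save(dst, feats)
        written.append(dst.name)
    return written
```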

You can download the pretrained LION checkpoints here and place them under ./logs.

Precompute and save hand point cloud features:

cd dataset/scripts && \
python preprocess_latent.py --config /path/to/vae_config

where /path/to/vae_config is the config file of the corresponding VAE.

(Optional) Precompute and save grasp GWH:

cd dataset/scripts && \
python compute_gwh.py

(Optional) We provide the precomputed Jacobian of each grasp here.

Train SB Models

Train the SB model using:

python train_fm_ddp.py --config /path/to/sb_config

We provide an example config file, sbfm_human_allegro.json, under ./config/mgg. Due to cloud drive size limits, we release the pretrained checkpoints for the H->A and H->S settings, trained with the GWH similarity metric, here.

Evaluation

After training, you can sample via:

python sample.py --config /path/to/sb_config

Report the results with:

python eval_samples.py

(Optional) Isaac Gym Simulation

Install extra dependencies:

pip install urdf-parser-py plotly transformations transforms3d

Install Isaac Gym and run:

cd grasp_test && python isaac_test_right.py --robot_name <ROBOT_NAME> --eval_dir <PATH_TO_SAMPLE_FOLDER>

Citation

If you find this codebase useful in your research, please consider citing:

@inproceedings{
    zhong2025grasp2grasp,
    title={Grasp2Grasp: Vision-Based Dexterous Grasp Translation via Schr\"odinger Bridges},
    author={Tao Zhong and Jonah Buchanan and Christine Allen-Blanchette},
    booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems (NeurIPS)},
    year={2025}
}

Credits

The following repositories are used in this codebase, either close to their original form or as inspiration:

License

Unless otherwise noted in the submodules, the rest of this repo is licensed under the MIT License. See LICENSE for more details.
