CDP: Towards Robust Autoregressive Visuomotor Policy Learning via Causal Diffusion

Project Page | arXiv

CDP

Causal Diffusion Policy: a transformer-based diffusion model that improves action prediction by conditioning on historical action sequences. (A): In practice, when performing the task of "grabbing the barrier", (B): observation quality is degraded by factors such as sensor noise, occlusions, and hardware limitations. This degraded yet high-dimensional observation data not only fails to provide sufficient spatial constraints for policy planning but also slows planning down. (C): As a result, the robot cannot perform accurate manipulation. (D): In this paper, we leverage historical action sequences to introduce temporally rich context as a supplement, which enables more robust policy generation.

🛠️ Getting Started

Installation

Please follow INSTALL.md to set up the cdp conda environment along with its required dependencies.

Data

Please refer to the 3D Diffusion Policy repository to generate demonstrations.

Training

  1. To generate the demonstrations, run the appropriate gen_demonstration_xxxxx.sh script—check each script for specifics. For example:

    bash scripts/gen_demonstration_adroit.sh hammer

    This command collects demonstrations for the Adroit hammer task and automatically stores them in the Causal-Diffusion-Policy/data/ folder.

  2. To train and evaluate a policy, run the following command:

    bash scripts/train_policy.sh dp2 adroit_hammer 0801 0 0
    bash scripts/train_policy.sh cdp2 adroit_hammer 0801 0 0
    bash scripts/train_policy.sh dp3 adroit_hammer 0801 0 0
    bash scripts/train_policy.sh cdp3 adroit_hammer 0801 0 0

    These commands train DP2, CDP2, DP3, and CDP3 policies on the Adroit hammer task, respectively.
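
    To sweep all four algorithms on one task, a small loop can print the commands above for review before launching them. Note the roles of the positional arguments (experiment tag, seed, GPU id) are our assumption here; check scripts/train_policy.sh for their actual meaning before relying on them.

    ```shell
    # Hypothetical dry-run helper: print the training command for each algorithm.
    # Assumed argument order: <algorithm> <task> <experiment_tag> <seed> <gpu_id>.
    for algo in dp2 cdp2 dp3 cdp3; do
      echo bash scripts/train_policy.sh "$algo" adroit_hammer 0801 0 0
    done
    ```

    Drop the leading `echo` to actually launch the runs instead of printing them.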

😺 Acknowledgement

Our code is largely built upon 3D Diffusion Policy. We thank the authors for their nicely open-sourced code and their great contributions to the community. For questions or research collaboration opportunities, please don't hesitate to reach out: [email protected]

📝 Citation

If you find our work useful, please consider citing:

@article{ma2025cdp,
  title={CDP: Towards Robust Autoregressive Visuomotor Policy Learning via Causal Diffusion},
  author={Ma, Jiahua and Qin, Yiran and Li, Yixiong and Liao, Xuanqi and Guo, Yulan and Zhang, Ruimao},
  journal={arXiv preprint arXiv:2506.14769},
  year={2025}
}
