This repository contains the code for REED, the method presented in the paper "Learning Diffusion Models with Flexible Representation Guidance" (NeurIPS 2025). Check out our project page here!
- [2025/10/12] We release the project page.
- [2025/09/18] REED is accepted to NeurIPS 2025!
- [2025/07/12] Code is released!
- [2025/07/12] Paper is available on arXiv!
REED presents a comprehensive framework for representation-enhanced diffusion model training, combining theoretical analysis, multimodal representation alignment strategies, an effective training curriculum, and practical domain-specific instantiations (image, protein sequence, and molecule).
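To illustrate the general idea of representation-enhanced diffusion training, the toy sketch below combines a standard denoising loss with a representation-alignment term that pulls the model's intermediate features toward representations from a pretrained encoder. This is a minimal, hypothetical sketch in NumPy for intuition only; the function names (`cosine_align_loss`, `reed_style_loss`) and the weighting parameter `lam` are illustrative and are not the repository's actual API.

```python
import numpy as np

def cosine_align_loss(h, z):
    # Negative mean cosine similarity between projected model features h
    # and target representations z, both of shape [batch, dim].
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return -np.mean(np.sum(h * z, axis=1))

def reed_style_loss(denoise_err, h, z, lam=0.5):
    # Total objective: standard diffusion denoising loss (mean squared
    # prediction error) plus a weighted representation-alignment term.
    diffusion_loss = np.mean(denoise_err ** 2)
    return diffusion_loss + lam * cosine_align_loss(h, z)

rng = np.random.default_rng(0)
err = rng.normal(size=(4, 8))    # noise-prediction residuals
h = rng.normal(size=(4, 16))     # model features (after projection)
z = rng.normal(size=(4, 16))     # pretrained-encoder representations
print(reed_style_loss(err, h, z))
```

Setting `lam` to 0 recovers plain diffusion training; larger values weight the alignment term more heavily, which in REED is scheduled via a training curriculum.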
For class-conditional ImageNet generation, the detailed code and instructions are in image/.
For protein inverse folding, REED accelerates training. The detailed code and instructions are in protein/.
For molecule generation, REED improves metrics such as atom and molecule stability, validity, energy, and strain on the challenging GEOM-Drugs dataset. The detailed code and instructions are in molecule/.
If you find this work useful in your research, please cite:
```bibtex
@article{wang2025learning,
  title={Learning Diffusion Models with Flexible Representation Guidance},
  author={Chenyu Wang and Cai Zhou and Sharut Gupta and Zongyu Lin and Stefanie Jegelka and Stephen Bates and Tommi Jaakkola},
  journal={arXiv preprint arXiv:2507.08980},
  year={2025}
}
```
