This repository contains the source code for the VLA-Adapter project page.
📌 All contents are now open source:
🖥️ Project page: https://vla-adapter.github.io/
📝 Paper: https://arxiv.org/abs/2509.09372
💻 GitHub: https://github.com/OpenHelix-Team/VLA-Adapter
🤗 Model: https://huggingface.co/VLA-Adapter
If you find VLA-Adapter useful for your work, please cite:
@article{wang2025vlaadapter,
  author  = {Wang, Yihao and Ding, Pengxiang and Li, Lingxiao and Cui, Can and Ge, Zirui and Tong, Xinyang and Song, Wenxuan and Zhao, Han and Zhao, Wei and Hou, Pengxu and Huang, Siteng and Tang, Yifan and Wang, Wenhui and Zhang, Ru and Liu, Jianyi and Wang, Donglin},
  title   = {VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model},
  journal = {arXiv preprint arXiv:2509.09372},
  year    = {2025}
}
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

