[ICCV 2025] TurboTrain: Towards Efficient and Balanced Multi-Task Learning for Multi-Agent Perception and Prediction.


TurboTrain: Towards Efficient and Balanced Multi-Task Learning for Multi-Agent Perception and Prediction

paper supplement

[ICCV 2025] This is the official implementation of "TurboTrain: Towards Efficient and Balanced Multi-Task Learning for Multi-Agent Perception and Prediction", Zewei Zhou*, Seth Z. Zhao*, Tianhui Cai, Zhiyu Huang, Bolei Zhou, Jiaqi Ma

teaser

TurboTrain is the first efficient and balanced multi-task learning paradigm for multi-agent perception and prediction. It combines task-agnostic self-supervised pretraining with multi-task balancing, eliminating the need to manually design and tune complex multi-stage training pipelines while reducing training time and improving performance.
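Until the codebase is released, here is a minimal, hypothetical sketch of what a multi-task balancing stage can look like in PyTorch. It uses homoscedastic-uncertainty weighting (Kendall et al., 2018) as a stand-in illustration; this is *not* TurboTrain's actual balancing method, and the class name and API below are assumptions for the example only.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Illustrative multi-task loss balancing (hypothetical, not TurboTrain's method).

    Each task loss L_i is scaled by exp(-s_i) and regularized by s_i,
    where s_i = log(sigma_i^2) is a learnable per-task log-variance.
    Tasks with higher observed noise are automatically down-weighted.
    """

    def __init__(self, num_tasks: int):
        super().__init__()
        # One learnable log-variance per task, initialized to 0 (weight = 1).
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        # task_losses: iterable of scalar loss tensors, one per task
        # (e.g., detection loss, prediction loss).
        total = torch.zeros(())
        for i, loss in enumerate(task_losses):
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total

# Example: balance a detection loss and a prediction loss.
balancer = UncertaintyWeightedLoss(num_tasks=2)
det_loss, pred_loss = torch.tensor(1.0), torch.tensor(2.0)
total = balancer([det_loss, pred_loss])  # at init, weights are 1, so total = 3.0
```

The `log_vars` parameters are optimized jointly with the network, so the relative task weights adapt during training instead of being hand-tuned.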

News

Release Plan

  • 2025/08: ✅ TurboTrain paper released
  • 2025/12: Full codebase release

Acknowledgement

The codebase is built upon V2XPnP in the OpenCDA ecosystem family.

Citation

If you find this repository useful for your research, please consider giving us a star 🌟 and citing our paper.

@inproceedings{zhou2025turbotrain,
 title={TurboTrain: Towards efficient and balanced multi-task learning for multi-agent perception and prediction},
 author={Zhou, Zewei and Zhao, Seth Z and Cai, Tianhui and Huang, Zhiyu and Zhou, Bolei and Ma, Jiaqi},
 booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
 pages={4391--4402},
 year={2025}
}

Other useful citations:

@article{zhao2024coopre,
 title={CooPre: Cooperative Pretraining for V2X Cooperative Perception},
 author={Zhao, Seth Z and Xiang, Hao and Xu, Chenfeng and Xia, Xin and Zhou, Bolei and Ma, Jiaqi},
 journal={arXiv preprint arXiv:2408.11241},
 year={2024}
}

@inproceedings{zhou2025v2xpnp,
 title={V2XPnP: Vehicle-to-everything spatio-temporal fusion for multi-agent perception and prediction},
 author={Zhou, Zewei and Xiang, Hao and Zheng, Zhaoliang and Zhao, Seth Z and Lei, Mingyue and Zhang, Yun and Cai, Tianhui and Liu, Xinyi and Liu, Johnson and Bajji, Maheswari and others},
 booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
 pages={25399--25409},
 year={2025}
}

@article{xiang2024v2xreal,
 title={V2X-Real: A Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception},
 author={Xiang, Hao and Zheng, Zhaoliang and Xia, Xin and Xu, Runsheng and Gao, Letian and Zhou, Zewei and Han, Xu and Ji, Xinkai and Li, Mingxi and Meng, Zonglin and others},
 journal={arXiv preprint arXiv:2403.16034},
 year={2024}
}
