Official implementation of
"AirV2X: Unified Air-Ground/Vehicle-to-Everything Collaboration for Perception"
Download AirV2X-Perception from Hugging Face and extract it to any location:
mkdir dataset
cd dataset # Use another directory to avoid naming conflict
conda install -c conda-forge git-lfs
git lfs install --skip-smudge
git clone https://huggingface.co/datasets/xiangbog/AirV2X-Perception
cd AirV2X-Perception
git lfs pull
# git lfs pull --include "path/to/folder"  # If you would like to download only part of the dataset

We also provide a mini batch for quick testing and debugging.
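If you only need the mini batch for a quick smoke test, you can combine the clone above with a selective LFS pull. The folder name below is an assumption; list the tracked files first to find the actual path.

# Hypothetical example: pull only the mini-batch split ("mini" is an assumed folder name)
git lfs ls-files                    # inspect which LFS paths exist in the dataset repo
git lfs pull --include "mini/*"     # replace "mini" with the actual mini-batch folder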
Detailed instructions and environment specifications are in doc/INSTALL.md.
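As a rough sketch only (the environment name, Python version, and install files below are assumptions; doc/INSTALL.md is authoritative), a typical OpenCOOD-style setup looks like:

conda create -n airv2x python=3.8 -y   # environment name and Python version are assumptions
conda activate airv2x
pip install -r requirements.txt        # assumes the repo ships a requirements file
python setup.py develop                # assumes an OpenCOOD-style editable install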
python opencood/tools/train.py \
-y /path/to/config_file.yaml

Example: train Where2Comm (LiDAR-only)
python opencood/tools/train.py \
-y opencood/hypes_yaml/airv2x/lidar/det/airv2x_intermediate_where2com.yaml

Tip:
Some models, such as V2X-ViT and CoBEVT, consume a large amount of VRAM.
Enable mixed-precision training with --amp if you encounter OOM errors, but watch out for NaN/Inf instability.
python opencood/tools/train.py \
-y opencood/hypes_yaml/airv2x/lidar/det/airv2x_intermediate_v2xvit.yaml \
--amp

For multi-GPU training, launch with torchrun:

CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun \
--standalone --nproc_per_node=4 \
opencood/tools/train.py \
-y /path/to/config_file.yaml

Example: LiDAR-only Where2Comm with 8 GPUs
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun \
--standalone \
--nproc_per_node=8 \
opencood/tools/train.py \
-y opencood/hypes_yaml/airv2x/lidar/det/airv2x_intermediate_where2com.yaml

These models were trained on 2 nodes × 1 GPU (batch size 1).
If you change the number of GPUs or batch size, adjust the learning rate accordingly.
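One common heuristic (not prescribed here) is linear scaling: grow the learning rate by the same factor as the effective batch size (number of GPUs × per-GPU batch size). A minimal sketch; the baseline values and the config key are assumptions, so check your YAML:

# Assumed baseline: 2 GPUs x batch size 1 with lr 0.001 (values are assumptions)
gpus=8
batch_per_gpu=1
base_lr=0.001
base_batch=2
new_lr=$(python -c "print($base_lr * $gpus * $batch_per_gpu / $base_batch)")
echo "scaled lr: $new_lr"   # then set this value for the learning rate in your training YAML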
python opencood/tools/inference_multi_scenario.py \
--model_dir opencood/logs/airv2x_intermediate_where2comm/default__2025_07_10_09_17_28 \
--eval_best_epoch \
--save_vis

To visualize training curves with TensorBoard:

tensorboard --logdir opencood/logs --port 10000 --bind_all

Citation:

@article{gao2025airv2x,
title = {AirV2X: Unified Air--Ground/Vehicle-to-Everything Collaboration for Perception},
author = {Gao, Xiangbo and Tu, Zhengzhong and others},
journal = {arXiv preprint arXiv:2506.19283},
year = {2025}
}

We will continuously update this repository with code, checkpoints, and documentation.
Feel free to open issues or pull requests; contributions are welcome!