We release checkpoints on KITTI, nuScenes, and CalibDB! The CalibDB checkpoint is expected to perform better in indoor scenes. All checkpoints are available on Hugging Face and Google Drive.
First create a conda environment:
conda create -n bevcalib python=3.11
conda activate bevcalib
pip3 install -r requirements.txt
The code is built with the following libraries:
- Python = 3.11
- PyTorch = 2.6.0
- CUDA = 11.8
- cuda-toolkit = 11.8
- spconv-cu118
- OpenCV
- pandas
- open3d
- transformers
- deformable_attention
- tensorboard
- wandb
- pykitti
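Before building the CUDA extensions, it can save time to confirm that the packages above import cleanly. The sketch below is an illustrative sanity check, not part of the repo; the import names are assumptions (e.g. OpenCV installs as `cv2`):

```python
import importlib

def missing_modules(names):
    """Return the subset of module names that fail to import."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# Assumed import names for the dependencies listed above.
required = ["torch", "cv2", "pandas", "open3d", "transformers",
            "deformable_attention", "tensorboard", "wandb", "pykitti", "spconv"]
print("Missing:", missing_modules(required))
```

An empty list means the environment is ready for the build steps below.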
We recommend using the following command to install cuda-toolkit=11.8:
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
After installing the above dependencies, run the following command to build the bev_pool CUDA extension:
cd ./core/img_branch/bev_pool && python setup.py build_ext --inplace
We also provide a Dockerfile for easy setup. Run the following commands to build the Docker image and install the CUDA extensions:
docker build -f Dockerfile/Dockerfile -t bevcalib .
docker run --gpus all -it -v$(pwd):/workspace bevcalib
### In the docker, run the following command to install cuda extensions
cd ./core/img_branch/bev_pool && python setup.py build_ext --inplace
We release the code to reproduce our results on the KITTI-Odometry dataset. Please download the KITTI-Odometry dataset from here. After downloading, the directory structure should look like:
kitti-odometry/
├── sequences/
│ ├── 00/
│ │ ├── image_2/
│ │ ├── image_3/
│ │ ├── velodyne/
│ │ └── calib.txt
│ ├── 01/
│ │ ├── ...
│ └── 21/
│ └── ...
└── poses/
├── 00.txt
├── 01.txt
└── ...
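Each sequence's calib.txt stores one 3x4 matrix per line: a key (P0..P3 for the camera projection matrices, Tr for the LiDAR-to-camera transform) followed by 12 row-major floats. A minimal sketch of parsing this format (an illustrative helper, not the repo's own loader; the dependency list includes pykitti, which provides this functionality):

```python
def parse_kitti_calib(text):
    """Map each calib key (e.g. 'P2', 'Tr') to a 3x4 matrix (list of rows)."""
    calib = {}
    for line in text.strip().splitlines():
        key, _, values = line.partition(":")
        nums = [float(v) for v in values.split()]
        if len(nums) == 12:  # skip malformed or empty lines
            calib[key.strip()] = [nums[0:4], nums[4:8], nums[8:12]]
    return calib
```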
Please download the nuScenes dataset from here. We use the nuscenes-devkit to load the dataset. Please install it via pip install nuscenes-devkit.
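Once installed, nuscenes-devkit exposes a NuScenes class that indexes the dataset. A minimal sketch of loading it (the guard function and paths below are illustrative, not from the repo):

```python
import os

def load_nuscenes(dataroot, version="v1.0-trainval"):
    """Load the nuScenes index via nuscenes-devkit, or return None if the
    dataset root is absent (e.g. on a machine without the data)."""
    if not os.path.isdir(dataroot):
        return None
    # Requires `pip install nuscenes-devkit`.
    from nuscenes.nuscenes import NuScenes
    return NuScenes(version=version, dataroot=dataroot, verbose=True)
```

After loading, `nusc.sample` lists the annotated keyframes, and `nusc.get(...)` resolves records such as the camera and LiDAR sample_data entries by token.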
Coming soon!
We release pretrained models for KITTI-Odometry, nuScenes, and CalibDB. We provide two ways to download our models.
Please download the pretrained models from Google Drive and place them in the ./ckpts directory. You can also download them individually using gdown:
pip install gdown
# KITTI checkpoint
gdown "https://drive.google.com/uc?id=1gWO-Z4NXG2uWwsZPecjWByaZVtgJ0XNb" -O ckpts/kitti.pth
# nuScenes checkpoint
gdown "https://drive.google.com/uc?id=1TXRXDimvI3eG4l37zj0d9AuH3YqBl0En" -O ckpts/nuscenes.pth
# CalibDB checkpoint
gdown "https://drive.google.com/uc?id=1Oc9kmHR5XdG5k6HvZ88Y-QcvM9uvbslK" -O ckpts/calibdb.pth
We also release our pretrained models on Hugging Face. Install the Hugging Face CLI with pip install -U "huggingface_hub[cli]", then download the pretrained models by running the following commands:
# KITTI checkpoint
huggingface-cli download cisl-hf/BEVCalib --revision kitti-bev-calib --local-dir YOUR_LOCAL_PATH
# nuScenes checkpoint
huggingface-cli download cisl-hf/BEVCalib --revision nuscenes-bev-calib --local-dir YOUR_LOCAL_PATH
# CalibDB checkpoint
huggingface-cli download cisl-hf/BEVCalib --revision calibdb-bev-calib --local-dir YOUR_LOCAL_PATH
Before running any scripts, set PYTHONPATH to the repository root so that shared modules in core/ can be found:
export PYTHONPATH=$(pwd)
Please run the following command to evaluate the model on KITTI:
python kitti-bev-calib/inference_kitti.py \
--log_dir ./logs/kitti \
--dataset_root YOUR_PATH_TO_KITTI/kitti-odometry \
--ckpt_path YOUR_PATH_TO_KITTI_CHECKPOINT/ckpts/kitti.pth \
--angle_range_deg 20.0 \
--trans_range 1.5
Please run the following command to evaluate the model on nuScenes:
python nuscenes-bev-calib/inference_nuscenes.py \
--log_dir ./logs/nuscenes \
--dataset_root YOUR_PATH_TO_NUSCENES \
--ckpt_path YOUR_PATH_TO_NUSCENES_CHECKPOINT/ckpts/nuscenes.pth \
--angle_range_deg 20.0 \
--trans_range 1.5
We provide instructions to reproduce our results on the KITTI-Odometry dataset. Please run:
python kitti-bev-calib/train_kitti.py --log_dir ./logs/kitti \
--dataset_root YOUR_PATH_TO_KITTI/kitti-odometry \
--save_ckpt_per_epoches 40 --num_epochs 500 --label 20_1.5 --angle_range_deg 20 --trans_range 1.5 \
--deformable 0 --bev_encoder 1 --batch_size 16 --xyz_only 1 --scheduler 1 --lr 1e-4 --step_size 80
You can change --angle_range_deg and --trans_range to train under different noise settings. You can also use --pretrain_ckpt to load a pretrained model for fine-tuning on your own dataset.
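During training, the ground-truth extrinsics are perturbed by random noise bounded by --angle_range_deg (degrees) and --trans_range (meters). A rough sketch of how such a perturbation can be sampled with uniform noise; the helpers and the Z-Y-X Euler convention here are illustrative assumptions, and the repo's exact sampling may differ:

```python
import math
import random

def random_miscalibration(angle_range_deg=20.0, trans_range=1.5, rng=random):
    """Sample Euler angles (deg) and a translation (m), each drawn
    uniformly from the given symmetric ranges."""
    angles = [rng.uniform(-angle_range_deg, angle_range_deg) for _ in range(3)]
    trans = [rng.uniform(-trans_range, trans_range) for _ in range(3)]
    return angles, trans

def euler_to_matrix(rx, ry, rz):
    """Compose a rotation matrix R = Rz @ Ry @ Rx from Euler angles in degrees."""
    rx, ry, rz = (math.radians(a) for a in (rx, ry, rz))
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    return [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,                cy * cx],
    ]

angles, trans = random_miscalibration(20.0, 1.5)
R = euler_to_matrix(*angles)  # apply R and trans to the GT extrinsics
```

Larger ranges make the calibration problem harder, which is why the checkpoints are labeled by their noise setting (e.g. 20_1.5).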
BEVCalib is grateful to the following great open-source projects: BEVFusion, LCCNet, LSS, spconv, and Deformable Attention.
@inproceedings{bevcalib,
title={BEVCALIB: LiDAR-Camera Calibration via Geometry-Guided Bird's-Eye View Representations},
author={Weiduo Yuan and Jerry Li and Justin Yue and Divyank Shah and Konstantinos Karydis and Hang Qiu},
booktitle={9th Annual Conference on Robot Learning},
year={2025},
}