- Evaluation code is coming soon.
- [2026/03] RadarGen's inference and training pipelines for MAN TruckScenes and nuScenes are now available.
- [2025/12] Paper is on arXiv!
- Python >= 3.10
- CUDA 12.1
- Conda
```bash
git clone --recursive https://github.com/tomerborreda/RadarGen.git
cd RadarGen
```
Note: The `--recursive` flag is required to fetch the UniDepth and UFM submodules. If you already cloned without it, run:
```bash
git submodule update --init --recursive
```
```bash
bash environment_setup.sh radargen
conda activate radargen
```
This will install RadarGen and all dependencies, including:
- PyTorch 2.4.0 with CUDA 12.1
- UniDepth
- UFM
For manual installation, follow the steps in environment_setup.sh.
Note: A GPU is required for the installation of UniDepth.
We support multiple autonomous driving datasets. To get started, download and set up one of the following:
TruckScenes:
- Download: man.eu/truckscenes
- Install devkit:
```bash
pip install truckscenes-devkit
```
nuScenes:
- Download: nuscenes.org
- Install devkit:
```bash
pip install nuscenes-devkit
```
After downloading, update the configs in configs/*.yaml and configs/preprocessing/*.yaml to point to your dataset location.
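Since the configs are plain YAML, the path update can also be scripted. A minimal sketch using PyYAML (the `data.dataset_dir` key matches the training config described later; the helper function name is ours, and other config files may use different keys):

```python
import pathlib

import yaml  # PyYAML


def set_dataset_dir(config_path: str, dataset_dir: str) -> None:
    """Point a RadarGen YAML config at a local dataset copy (sketch)."""
    path = pathlib.Path(config_path)
    cfg = yaml.safe_load(path.read_text()) or {}
    # Keep any existing keys under `data`, only overwrite the dataset path.
    cfg.setdefault("data", {})["dataset_dir"] = dataset_dir
    path.write_text(yaml.safe_dump(cfg))
```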
Run inference on TruckScenes using the provided notebook: notebooks/inference_truckscenes.ipynb
Or use the Python API:
```python
from radargen.inference import RadarGenInference
from radargen.datasets import get_adapter

# Load dataset adapter
adapter = get_adapter("truckscenes", trucksc=trucksc_obj)

# Initialize model
model = RadarGenInference(
    adapter=adapter,
    config_path="configs/RadarGen_600M_512px_TS_inference.yaml",
    checkpoint_path="path/to/checkpoint",
)

# Generate point cloud from two consecutive frames
pcl = model.from_sample_data(sample_t0, sample_t1)
```
Download pretrained models:
Pretrained model weights are available on Hugging Face.
- MAN TruckScenes:
- Link: https://huggingface.co/TomerBo/RadarGen_600M_512px_TS/
- Path: `hf://TomerBo/RadarGen_600M_512px_TS/RadarGen_600M_512px_TS.safetensors`
- nuScenes: Coming soon
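For scripted downloads, the `hf://` path above splits into the repo id and filename expected by `huggingface_hub.hf_hub_download`. A small helper sketch (the helper name is ours, not part of RadarGen):

```python
def parse_hf_uri(uri: str) -> tuple[str, str]:
    """Split 'hf://<user>/<repo>/<path>' into (repo_id, filename)."""
    if not uri.startswith("hf://"):
        raise ValueError(f"not an hf:// URI: {uri}")
    user, repo, filename = uri[len("hf://"):].split("/", 2)
    return f"{user}/{repo}", filename


repo_id, filename = parse_hf_uri(
    "hf://TomerBo/RadarGen_600M_512px_TS/RadarGen_600M_512px_TS.safetensors"
)
# Then fetch locally with:
#   huggingface_hub.hf_hub_download(repo_id=repo_id, filename=filename)
```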
Before training, you need to create BEV conditioning maps and ground truth radar maps:
First, edit configs/preprocessing/truckscenes_bev_maps.yaml to set your dataset and output paths.
```bash
# Single GPU
python scripts/create_bev_maps.py \
    --config_path configs/preprocessing/truckscenes_bev_maps.yaml

# Multi-GPU (8 GPUs)
bash scripts/run_create_bev_maps.sh 8 \
    --config_path configs/preprocessing/truckscenes_bev_maps.yaml
```
This creates RGB appearance, semantic segmentation, and velocity maps in bird's eye view.
First, edit configs/preprocessing/truckscenes_radar_maps.yaml to set your dataset and output paths.
```bash
# Single GPU
python scripts/create_radar_maps.py \
    --config_path configs/preprocessing/truckscenes_radar_maps.yaml

# Multi-GPU (8 GPUs)
bash scripts/run_create_radar_maps.sh 8 \
    --config_path configs/preprocessing/truckscenes_radar_maps.yaml
```
This creates Point Density, RCS, and Doppler maps from the ground truth radar data.
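Conceptually, these maps are point-to-grid rasterizations. A generic sketch of a point-density BEV map (grid extent and resolution are placeholder values; this is not the repo's actual implementation):

```python
import numpy as np


def bev_density_map(points_xy: np.ndarray, extent: float = 50.0,
                    res: float = 0.5) -> np.ndarray:
    """Count radar points per BEV cell over [-extent, extent) metres.

    points_xy: (N, 2) array of x/y coordinates in the ego frame.
    Returns an (n, n) integer grid with n = 2 * extent / res.
    """
    n = int(2 * extent / res)
    # Map metric coordinates to integer cell indices.
    ij = np.floor((points_xy + extent) / res).astype(int)
    keep = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
    grid = np.zeros((n, n), dtype=np.int32)
    # row = y index, col = x index; np.add.at accumulates duplicates correctly.
    np.add.at(grid, (ij[keep, 1], ij[keep, 0]), 1)
    return grid
```

RCS and Doppler maps follow the same scatter pattern, accumulating per-point values instead of counts.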
- Prepare the config file

  Edit `configs/RadarGen_600M_img512_TS_training.yaml` to set your data paths:

  ```yaml
  data:
    dataset_dir: /path/to/man-truckscenes  # TruckScenes dataset location
    radar_maps_dir: "radar_maps/"          # Pre-computed radar maps (relative to dataset_dir)
    bev_conditioning_maps_dir: "bev_maps/" # Pre-computed BEV maps (relative to dataset_dir)
  ```

- Run training

  ```bash
  # 8 GPUs (default)
  bash scripts/train.sh configs/RadarGen_600M_img512_TS_training.yaml

  # Custom number of GPUs
  NUM_GPUS=4 bash scripts/train.sh configs/RadarGen_600M_img512_TS_training.yaml

  # Single GPU with custom arguments
  python scripts/train.py \
      --config_path configs/RadarGen_600M_img512_TS_training.yaml \
      --train.num_epochs=100
  ```
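The `--train.num_epochs=100` flag is a dot-notation config override. How such flags typically map onto nested config keys can be sketched as follows (illustrative only; RadarGen's own argument parsing may differ):

```python
def parse_overrides(argv: list[str]) -> dict:
    """Turn '--a.b=v' style flags into a nested dict of overrides."""
    cfg: dict = {}
    for arg in argv:
        key, _, raw = arg.lstrip("-").partition("=")
        *parents, leaf = key.split(".")
        node = cfg
        for p in parents:
            node = node.setdefault(p, {})  # walk/create nested sections
        try:
            node[leaf] = int(raw)  # keep numeric values numeric
        except ValueError:
            node[leaf] = raw
    return cfg
```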
We thank the following open-source codebases for their wonderful work: SANA, DC-AE, UniDepth, UFM, Mask2Former.
This repository's code inherits its license from the following open-source projects:
- SANA: Apache-2.0
- UniDepth: CC BY-NC 4.0
- Mask2Former: MIT License
- UFM: BSD 3-Clause
The pre-trained RadarGen checkpoint inherits its license from SANA's weights and the dataset it was trained on:
- SANA weights: NSCL v2-custom
- MAN TruckScenes: CC BY-NC-SA 4.0
Please refer to the respective repositories and datasets for full license details.
If you find our work useful, please consider starring ⭐ the repository and citing our paper:
```bibtex
@article{borreda2025radargen,
  title={RadarGen: Automotive Radar Point Cloud Generation from Cameras},
  author={Borreda, Tomer and Ding, Fangqiang and Fidler, Sanja and Huang, Shengyu and Litany, Or},
  journal={arXiv preprint arXiv:2512.17897},
  year={2025}
}
```

