OccProphet is a camera-only 4D occupancy forecasting framework that is highly efficient in both training and inference while delivering excellent forecasting performance.
OccProphet has the following features:
- Flexibility: OccProphet relies only on camera inputs, making it flexible and easy to adapt to different traffic scenarios.
- High Efficiency: OccProphet is both training- and inference-friendly thanks to its lightweight Observer-Forecaster-Refiner pipeline. A single RTX 4090 GPU is sufficient for both training and inference.
- High Performance: OccProphet achieves state-of-the-art performance on three real-world 4D occupancy forecasting datasets: nuScenes, Lyft-Level5 and nuScenes-Occupancy.
- [2025/10/01] Code and checkpoints of OccProphet are released.
We follow the installation instructions in Cam4DOcc.
- Create and activate a conda environment:
```bash
conda create -n occprophet python=3.7 -y
conda activate occprophet
```
- Install PyTorch:
```bash
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu113/torch_stable.html
```
- Install GCC-6:
```bash
conda install -c omgarcia gcc-6
```
- Install MMCV, MMDetection, and MMSegmentation:
```bash
pip install mmcv-full==1.4.0
pip install mmdet==2.14.0
pip install mmsegmentation==0.14.1
pip install yapf==0.40.1
```
- Install MMDetection3D:
```bash
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
git checkout v0.17.1
python setup.py install
```
- Install other dependencies:
```bash
pip install timm==0.9.12 huggingface-hub==0.16.4 safetensors==0.4.2
pip install open3d-python==0.7.0.0
pip install PyMCubes==0.1.4
pip install spconv-cu113
pip install fvcore
pip install setuptools==59.5.0
```
- Install the Lyft-Level5 dataset SDK:
```bash
pip install lyft_dataset_sdk
```
- Install OccProphet:
```bash
cd ..
git clone https://github.com/JLChen-C/OccProphet.git
cd OccProphet
export PYTHONPATH="."
python setup.py develop
export OCCPROPHET_DIR="$(pwd)"
```
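After installation, you can optionally verify that the core packages import with the expected versions. This is a minimal sanity-check sketch, not part of the official instructions:

```python
# Optional sanity check for the environment installed above.
import torch
import mmcv
import mmdet
import mmseg

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("mmcv-full:", mmcv.__version__)   # expect 1.4.0
print("mmdet:", mmdet.__version__)      # expect 2.14.0
print("mmseg:", mmseg.__version__)      # expect 0.14.1
```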
Optional: if you encounter issues when training or running inference on the GMO + GSO tasks, follow the instructions below to fix them.
- Install Numba and llvmlite:
```bash
pip install numba==0.55.0
pip install llvmlite==0.38.0
# Reinstall setuptools if you encounter this issue: AttributeError: module 'distutils' has no attribute 'version'
# pip install setuptools==59.5.0
```
- Modify the following files (or apply the sed sketch after this list):
  - In line 5 of `$PATH_TO_ANACONDA/envs/occprophet/lib/python3.7/site-packages/mmdet3d-0.17.1-py3.7-linux-x86_64.egg/mmdet3d/datasets/pipelines/data_augment_utils.py`, replace `from numba.errors import NumbaPerformanceWarning` with `from numba.core.errors import NumbaPerformanceWarning`.
  - In line 30 of `$PATH_TO_ANACONDA/envs/occprophet/lib/python3.7/site-packages/nuscenes/eval/detection/data_classes.py`, replace `self.class_names = self.class_range.keys()` with `self.class_names = list(self.class_range.keys())`.
- Install dependencies for visualization:
```bash
sudo apt-get install xvfb
pip install xvfbwrapper
pip install mayavi
```
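On a headless server, mayavi needs a virtual display to render. Below is a minimal sketch using `xvfbwrapper` (an illustration, not part of the official instructions; wrap whatever rendering code you run):

```python
from xvfbwrapper import Xvfb

# Start a virtual X display so off-screen rendering works without a monitor.
vdisplay = Xvfb(width=1920, height=1080)
vdisplay.start()
try:
    # Run your visualization code here, e.g. the logic in viz_pred.py.
    pass
finally:
    vdisplay.stop()
```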
- Create your data folder `$DATA` and download the datasets below to `$DATA`:
  - nuScenes V1.0 full dataset
  - nuScenes-Occupancy dataset, and the pickle files `nuscenes_occ_infos_train.pkl` and `nuscenes_occ_infos_val.pkl`
  - Lyft-Level5 dataset
- Link the datasets to the OccProphet folder:
```bash
mkdir $OCCPROPHET_DIR/data
ln -s $DATA/nuscenes $OCCPROPHET_DIR/data/nuscenes
ln -s $DATA/nuscenes-occupancy $OCCPROPHET_DIR/data/nuscenes-occupancy
ln -s $DATA/lyft $OCCPROPHET_DIR/data/lyft
```
- Move the pickle files `nuscenes_occ_infos_train.pkl` and `nuscenes_occ_infos_val.pkl` to the nuScenes dataset root:
```bash
mv $DATA/nuscenes_occ_infos_train.pkl $DATA/nuscenes/nuscenes_occ_infos_train.pkl
mv $DATA/nuscenes_occ_infos_val.pkl $DATA/nuscenes/nuscenes_occ_infos_val.pkl
```
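You can optionally confirm that the links and pickle files are in place before continuing (a quick check using the paths above):

```bash
# List the linked dataset folders and check the pickle files exist.
ls -l $OCCPROPHET_DIR/data
ls $OCCPROPHET_DIR/data/nuscenes/nuscenes_occ_infos_train.pkl \
   $OCCPROPHET_DIR/data/nuscenes/nuscenes_occ_infos_val.pkl
```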
- The dataset structure should be organized as the file tree below:
```
OccProphet
├── data/
│   ├── nuscenes/
│   │   ├── maps/
│   │   ├── samples/
│   │   ├── sweeps/
│   │   ├── lidarseg/
│   │   ├── v1.0-test/
│   │   ├── v1.0-trainval/
│   │   ├── nuscenes_occ_infos_train.pkl
│   │   └── nuscenes_occ_infos_val.pkl
│   ├── nuScenes-Occupancy/
│   ├── lyft/
│   │   ├── maps/
│   │   ├── train_data/
│   │   └── images/  # from train images, containing xxx.jpeg
│   └── cam4docc/
│       ├── GMO/
│       │   ├── segmentation/
│       │   ├── instance/
│       │   └── flow/
│       ├── MMO/
│       │   ├── segmentation/
│       │   ├── instance/
│       │   └── flow/
│       ├── GMO_lyft/
│       │   └── ...
│       └── MMO_lyft/
│           └── ...
```
- The data generation pipeline for GMO, GSO, and the other tasks is integrated into the dataloader, so you can directly run the training and evaluation scripts. Generation for each task may take several hours during the first epoch; subsequent epochs will be much faster.
- You can also generate the dataset without any training or inference by setting `only_generate_dataset = True` in the config file, or by adding `--cfg-options model.only_generate_dataset=True` after your command, as in the example below.
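For instance, using the launch script introduced in the training section below, a dataset-generation-only run might look like this (the GPU ID, port, and config path are placeholders):

```bash
# Generate the data for the task defined by $CONFIG without training.
CUDA_VISIBLE_DEVICES=0 PORT=26000 bash run.sh $CONFIG 1 --cfg-options model.only_generate_dataset=True
```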
- To launch the training, change your working directory to `$OCCPROPHET_DIR` and run the following command:
```bash
CUDA_VISIBLE_DEVICES=$YOUR_GPU_IDS PORT=$PORT bash run.sh $CONFIG $NUM_GPUS
```
- Argument explanation:
  - `$YOUR_GPU_IDS`: the GPU IDs you want to use
  - `$PORT`: the connection port for distributed training
  - `$CONFIG`: the config path
  - `$NUM_GPUS`: the number of available GPUs
- For example, you can launch the training on GPUs 0, 1, 2, and 3 with the config file `./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py` as follows:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=26000 bash run.sh ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py 4
```
- Optional: the default `data.workers_per_gpu` is set to $2\times$ `data.samples_per_gpu` for faster data loading. If training stops because it runs out of CPU memory, try setting `data.workers_per_gpu=1` in the config file, or add `--cfg-options data.workers_per_gpu=1` after your command:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=26000 bash run.sh ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py 4 --cfg-options data.workers_per_gpu=1
```

To launch the evaluation, change your working directory to `$OCCPROPHET_DIR` and run the following command:
```bash
CUDA_VISIBLE_DEVICES=$YOUR_GPU_IDS PORT=$PORT bash run_eval.sh $CONFIG $CHECKPOINT $NUM_GPUS --evaluate
```
- Argument explanation:
  - `$YOUR_GPU_IDS`: the GPU IDs you want to use
  - `$PORT`: the connection port for distributed evaluation
  - `$CONFIG`: the config path
  - `$CHECKPOINT`: the checkpoint path
  - `$NUM_GPUS`: the number of available GPUs
- For example, you can launch the evaluation on GPUs 0, 1, 2, and 3 with the config file `./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py` as follows:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=26006 bash run_eval.sh ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py ./work_dirs/occprophet/OccProphet_4x1_inf-GMO_nuscenes/OccProphet_4x1_inf-GMO_nuscenes.pth 4
```
- By default, evaluation measures the IoU of all future frames. You can change the evaluated time horizon by modifying the following settings in the config file, or by adding them after your command.
- For example, to evaluate the IoU of the present frame, set `model.test_present=True` in the config file, or add `--cfg-options model.test_present=True` after your command:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=26006 bash run_eval.sh ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py ./work_dirs/occprophet/OccProphet_4x1_inf-GMO_nuscenes/OccProphet_4x1_inf-GMO_nuscenes.pth 4 --cfg-options model.test_present=True
```
- Fine-grained Evaluation: you can evaluate the IoU of the X-th frame by setting `model.test_time_indices=X` in the config file, or by adding `--cfg-options model.test_time_indices=X` after your command. For example, to evaluate the IoU of the 5th frame from the last, set `model.test_time_indices=-5`:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=26006 bash run_eval.sh ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py ./work_dirs/occprophet/OccProphet_4x1_inf-GMO_nuscenes/OccProphet_4x1_inf-GMO_nuscenes.pth 4 --cfg-options model.test_time_indices=-5
```
- Additional: to save the prediction results to `YOUR_RESULT_DIR`, set `model.save_pred=True model.save_path=YOUR_RESULT_DIR` in the config file, or add `--cfg-options model.save_pred=True model.save_path=YOUR_RESULT_DIR` after your command (see the loading sketch after this list):
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=26006 bash run_eval.sh ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py ./work_dirs/occprophet/OccProphet_4x1_inf-GMO_nuscenes/OccProphet_4x1_inf-GMO_nuscenes.pth 4 --cfg-options model.save_pred=True model.save_path=YOUR_RESULT_DIR
```
- Optional: the default `data.workers_per_gpu` is set to $2\times$ `data.samples_per_gpu` for faster data loading. If evaluation stops because it runs out of CPU memory, try setting `data.workers_per_gpu=1` in the config file, or add `--cfg-options data.workers_per_gpu=1` after your command:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=26006 bash run_eval.sh ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py ./work_dirs/occprophet/OccProphet_4x1_inf-GMO_nuscenes/OccProphet_4x1_inf-GMO_nuscenes.pth 4 --cfg-options data.workers_per_gpu=1
```

Please note that the VPQ metric for 3D instance prediction is computed on the raw model outputs, without the refinement proposed in Cam4DOcc.
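If you saved predictions via `model.save_pred`, you can inspect them offline. The on-disk format is not documented here, so the sketch below is hypothetical: it assumes one NumPy `.npy` occupancy volume per sample in `YOUR_RESULT_DIR`, and should be adapted to the actual output format:

```python
import glob
import numpy as np

# Hypothetical layout: one .npy occupancy volume per sample.
for path in sorted(glob.glob("YOUR_RESULT_DIR/*.npy")):
    occ = np.load(path)
    # Report the volume shape and the fraction of occupied voxels (label > 0).
    print(path, occ.shape, float((occ > 0).mean()))
```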
- Ground-Truth Occupancy Labels: to visualize the ground-truth occupancy labels, run the following command to save the visualization results to `$SAVE_DIR` (default: `$OCCPROPHET_DIR/viz`), where `$PRED_DIR` is the directory of the prediction results:
```bash
cd $OCCPROPHET_DIR/viz
python viz_pred.py --pred_dir $PRED_DIR --save_dir $SAVE_DIR
```
- If you want to visualize the changes across different frames, add `--show_by_frame` after your command:
```bash
cd $OCCPROPHET_DIR/viz
python viz_pred.py --pred_dir $PRED_DIR --save_dir $SAVE_DIR --show_by_frame
```
- Occupancy Forecasting Results: to visualize the occupancy forecasting results in `$PRED_DIR`, run the following command to save the visualization results to `$SAVE_DIR`:
```bash
cd $OCCPROPHET_DIR/viz
python viz_pred.py --pred_dir $PRED_DIR --save_dir $SAVE_DIR
```
- If you want to visualize the changes across different frames, add `--show_by_frame` after your command:
```bash
cd $OCCPROPHET_DIR/viz
python viz_pred.py --pred_dir $PRED_DIR --save_dir $SAVE_DIR --show_by_frame
```

OccProphet supports all 5 tasks:
- Inflated GMO on nuScenes dataset
- Inflated GMO on Lyft Level 5 dataset
- Fine-grained GMO on nuScenes-Occupancy dataset
- Inflated GMO and Fine-grained GSO on nuScenes and nuScenes-Occupancy datasets
- Fine-grained GMO and Fine-grained GSO on nuScenes-Occupancy dataset
The configs and checkpoints of all 5 tasks are released and can be accessed through the links in the table below:
| Task | Dataset | Config | Model |
|---|---|---|---|
| Inflated GMO | nuScenes | OccProphet_4x1_inf-GMO_nuscenes.py | OccProphet_4x1_inf-GMO_nuscenes.pth |
| Inflated GMO | Lyft-Level5 | OccProphet_4x1_inf-GMO_lyft-level5.py | OccProphet_4x1_inf-GMO_lyft-level5.pth |
| Fine-grained GMO | nuScenes-Occupancy | OccProphet_4x1_fine-GMO_nuscenes-occ.py | OccProphet_4x1_fine-GMO_nuscenes-occ.pth |
| Inflated GMO, Fine-grained GSO | nuScenes, nuScenes-Occupancy | OccProphet_fp16_4x1_inf-GMO+fine-GSO.py | OccProphet_fp16_4x1_inf-GMO+fine-GSO_nuscenes_nuscenes-occ.pth |
| Fine-grained GMO, Fine-grained GSO | nuScenes-Occupancy | OccProphet_fp16_4x1_fine-GMO+fine-GSO_nuscenes_nuscenes-occ.py | OccProphet_fp16_4x1_fine-GMO+fine-GSO_nuscenes-occ.pth |
If you are interested in OccProphet, or find it useful to your work, please feel free to give us a star ⭐ or cite our paper 📝:
```bibtex
@article{chen2025occprophet,
  title={OccProphet: Pushing Efficiency Frontier of Camera-Only 4D Occupancy Forecasting with Observer-Forecaster-Refiner Framework},
  author={Chen, Junliang and Xu, Huaiyuan and Wang, Yi and Chau, Lap-Pui},
  journal={arXiv preprint arXiv:2502.15180},
  year={2025}
}
```

We thank Cam4DOcc for their significant contributions to the end-to-end 4D occupancy forecasting community. We developed our codebase upon their excellent work.
- 3D-Occupancy-Perception: A Survey on Occupancy Perception for Autonomous Driving: The Information Fusion Perspective