
${C}^{3}$-GS



📌 Introduction

This repository contains the official implementation of our BMVC 2025 paper: ${C}^{3}$-GS: Learning Context-aware, Cross-dimension, Cross-scale Feature for Generalizable Gaussian Splatting.

🔧 Setup

1.1 Requirements

Use the following commands to create a conda environment and install the required packages:

conda create -n c3gs python=3.7.13
conda activate c3gs
pip install -r requirements.txt
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 -f https://download.pytorch.org/whl/torch_stable.html

Install the Gaussian Splatting renderer. We use the MVPGS implementation, which returns both the rendered image and the depth map:

git clone https://github.com/zezeaaa/MVPGS.git --recursive
pip install MVPGS/submodules/diff-gaussian-rasterization
pip install MVPGS/submodules/simple-knn
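
For reference, a minimal rendering sketch is shown below. It assumes the standard diff-gaussian-rasterization interface from the original 3DGS code; the camera and gaussians containers are hypothetical, and the exact output order of the MVPGS fork (which adds the depth map) is an assumption, so verify it against the fork's rasterizer before relying on it.

import math
import torch
from diff_gaussian_rasterization import GaussianRasterizationSettings, GaussianRasterizer

def render_view(camera, gaussians, bg_color):
    # Rasterization settings; field names follow the upstream 3DGS API.
    raster_settings = GaussianRasterizationSettings(
        image_height=int(camera.image_height),
        image_width=int(camera.image_width),
        tanfovx=math.tan(camera.FoVx * 0.5),
        tanfovy=math.tan(camera.FoVy * 0.5),
        bg=bg_color,
        scale_modifier=1.0,
        viewmatrix=camera.world_view_transform,
        projmatrix=camera.full_proj_transform,
        sh_degree=gaussians.active_sh_degree,
        campos=camera.camera_center,
        prefiltered=False,
        debug=False,
    )
    rasterizer = GaussianRasterizer(raster_settings=raster_settings)
    # The MVPGS fork returns a depth map in addition to the rendered image;
    # the output order below is an assumption -- check the fork before use.
    image, radii, depth = rasterizer(
        means3D=gaussians.xyz,
        means2D=torch.zeros_like(gaussians.xyz, requires_grad=True),
        shs=gaussians.sh,
        colors_precomp=None,
        opacities=gaussians.opacity,
        scales=gaussians.scaling,
        rotations=gaussians.rotation,
        cov3D_precomp=None,
    )
    return image, depth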

1.2 Datasets

🧠 Training

This implementation is built upon the MVSGaussian framework, with our modules and improvements integrated into its existing pipeline.

To maintain compatibility, we preserve the original directory and command structure (e.g., paths under mvsgs/...).

2.1 Training on DTU

To train a generalizable model from scratch on DTU, first set data_root in configs/mvsgs/dtu_pretrain.yaml, then run:

python train_net.py --cfg_file configs/mvsgs/dtu_pretrain.yaml train.batch_size 4

More details can be found in the MVSGaussian codebase.

2.2 Per-scene optimization

One strategy is to optimize only the initial Gaussian point cloud provided by the generalizable model.

bash scripts/mvsgs/llff_ft.sh
bash scripts/mvsgs/nerf_ft.sh
bash scripts/mvsgs/tnt_ft.sh
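
As a rough illustration of this strategy (not the repository's fine-tuning code; render_fn, the parameter layout, and the hyper-parameters are hypothetical), a per-scene loop that optimizes only the predicted Gaussian attributes might look like:

import torch

def finetune_gaussians(init_gaussians, train_views, render_fn, iters=1000, lr=1e-3):
    # Wrap the Gaussian attributes predicted by the generalizable model as
    # learnable parameters; only these are optimized, the network stays frozen.
    params = {k: torch.nn.Parameter(v.clone()) for k, v in init_gaussians.items()}
    optimizer = torch.optim.Adam(params.values(), lr=lr)
    for it in range(iters):
        cam, gt_image = train_views[it % len(train_views)]
        pred_image, _ = render_fn(cam, params)  # placeholder renderer
        loss = torch.nn.functional.l1_loss(pred_image, gt_image)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return {k: v.detach() for k, v in params.items()}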

More details can be found in the MVSGaussian codebase.

📊 Testing

3.1 Evaluation on DTU

Use the following command to evaluate the model on DTU:

python run.py --type evaluate --cfg_file configs/mvsgs/dtu_pretrain.yaml mvsgs.cas_config.render_if False,True mvsgs.cas_config.volume_planes 48,8 mvsgs.eval_depth True

The rendered images will be saved in result/mvsgs/dtu_pretrain.

3.2 Evaluation on Real Forward-facing

python run.py --type evaluate --cfg_file configs/mvsgs/llff_eval.yaml

3.3 Evaluation on NeRF Synthetic

python run.py --type evaluate --cfg_file configs/mvsgs/nerf_eval.yaml

3.4 Evaluation on Tanks and Temples

python run.py --type evaluate --cfg_file configs/mvsgs/tnt_eval.yaml

📜 Citation

If you find this work useful in your research, please cite:

@inproceedings{hu2025c3gs,
  title     = {{$C^3$-GS: Learning Context-aware, Cross-dimension, Cross-scale Feature for Generalizable Gaussian Splatting}},
  author    = {Hu, Yuxi and Zhang, Jun and Chen, Kuangyi and Zhang, Zhe and Fraundorfer, Friedrich},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2025}
}

⚙️ Notice

To support practical usage on a single GPU (e.g., RTX 3090/4090), the released code applies the Transformer only at the low-resolution level.

Compared to the setting in the paper (Transformer at both the low- and high-resolution levels), this variant trades a small drop in accuracy for significantly better efficiency and scalability.
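
A minimal sketch of the idea, with hypothetical module and tensor names that do not match the released code: self-attention refines only the coarse feature map, while the high-resolution map keeps a plain convolution.

import torch
import torch.nn as nn

class LowResOnlyAttention(nn.Module):
    # Illustrative only: attention cost scales with the number of tokens squared,
    # so restricting it to the low-resolution level keeps memory manageable.
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.high_res_conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, feat_low, feat_high):
        b, c, h, w = feat_low.shape
        tokens = feat_low.flatten(2).transpose(1, 2)    # (B, h*w, C)
        refined, _ = self.attn(tokens, tokens, tokens)  # attention at low resolution only
        feat_low = refined.transpose(1, 2).reshape(b, c, h, w)
        feat_high = self.high_res_conv(feat_high)       # high resolution stays convolutional
        return feat_low, feat_high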

❤️ Acknowledgements

This repository builds on the excellent work of MVSGaussian and MVSplat. We sincerely thank the authors for their contributions to the community.
