
MaskGaussian: Adaptive 3D Gaussian Representation from Probabilistic Masks [CVPR 2025]

📣 Updates

[07/2025] 🎈: We propose mask-Grendel-GS, which combines MaskGaussian and Grendel-GS to support pruning in distributed training. This framework holds potential for pruning massive Gaussian points in large-scale scenes, where the excessive number of Gaussians is a main bottleneck. https://github.com/kaikai23/mask-Grendel-GS/

[03/2025] 🎈: Post-training code is released. Now you can also directly use MaskGaussian to prune an already trained 3D-GS!

[02/2025] Accepted to CVPR 2025.

[01/2025] We release the code.

Overview

We introduce MaskGaussian to prune Gaussians while retaining reconstruction quality. During training, it dynamically samples a subset of Gaussians to render the scene. Gaussians that are not sampled still receive gradients through mask-diff-gaussian-rasterization, which updates their probability of being used in future iterations.
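Conceptually, each Gaussian carries a keep-probability, and each iteration Bernoulli-samples a binary mask over the Gaussians; only the sampled subset is rendered. A toy sketch of the sampling step (names are ours, not the repository's API; the real code learns the probabilities via gradients through mask-diff-gaussian-rasterization):

```python
import random

rng = random.Random(42)

# Hypothetical per-Gaussian keep-probabilities.
keep_probs = [0.95, 0.60, 0.10, 0.80]

def sample_mask(probs):
    """Bernoulli-sample one binary mask: 1 = render this Gaussian."""
    return [1 if rng.random() < p else 0 for p in probs]

mask = sample_mask(keep_probs)
kept = [i for i, m in enumerate(mask) if m]
print(f"rendering {len(kept)} of {len(keep_probs)} Gaussians: {kept}")
```

Gaussians with low keep-probabilities are rendered less often, but because unsampled Gaussians still receive gradients, a mistakenly down-weighted Gaussian can recover.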

Our method improves rendering speed, reduces model size, GPU memory usage, and training time, and supports both training from scratch and post-training refinement.


Installation

  1. Clone the repository
git clone --recursive https://github.com/kaikai23/MaskGaussian.git
cd MaskGaussian
  2. Install dependencies
conda create -n maskgs python=3.9
conda activate maskgs
pip install "numpy<2.0" plyfile tqdm icecream torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --extra-index-url https://download.pytorch.org/whl/cu118
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit  # can be skipped if cuda-11.8 is already installed
CUDA_HOME=PATH/TO/CONDA/envs/maskgs pip install submodules/mask-diff-gaussian-rasterization submodules/diff-gaussian-rasterization submodules/simple-knn/

Data Preparation

First, create a data/ folder inside the project path by running:

mkdir data

The data structure will be organised as follows:

data/
├── gs_datasets/
│   ├── scene1/
│   │   ├── images/
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   └── ...
│   │   └── sparse/
│   │       └── 0/
│   ├── scene2/
│   │   ├── images/
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   └── ...
│   │   └── sparse/
│   │       └── 0/
│   └── ...
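A quick sanity check of this COLMAP-style layout can be scripted; the helper below is ours, not part of the repository:

```python
import os
import tempfile

# Each scene folder must contain an images/ folder and a sparse/0/ folder.
REQUIRED = ["images", os.path.join("sparse", "0")]

def check_scene(scene_dir):
    """Return the required subfolders missing from a scene directory."""
    return [p for p in REQUIRED if not os.path.isdir(os.path.join(scene_dir, p))]

# Demo: build a well-formed scene in a temporary directory and validate it.
with tempfile.TemporaryDirectory() as root:
    scene = os.path.join(root, "data", "gs_datasets", "scene1")
    os.makedirs(os.path.join(scene, "images"))
    os.makedirs(os.path.join(scene, "sparse", "0"))
    missing = check_scene(scene)
    print("missing:", missing)  # an empty list means the layout is valid
```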

Public Data

  • The MipNeRF360 scenes are provided by the paper authors here.
  • The SfM datasets for Tanks&Temples and Deep Blending are hosted by 3D-Gaussian-Splatting here.

Training and Evaluation in One Go

To train, render, and evaluate our method on the three datasets from the paper, simply run:

python run_all.py

The training output is logged in train.log under the output/scene_name folder, and the final metrics are recorded in results.json in the same folder. The training time can be read from train.log, and GPU memory consumption can be read from the GPU_mem chart in the TensorBoard records by running tensorboard --logdir /path/to/output (when measuring GPU memory, please use --data_device cpu).

Finally, note that the output of our method is fully in the vanilla 3D-GS format and can be viewed directly in any 3DGS viewer, such as the popular SuperSplat and antimatter15.
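If you want to post-process the metrics programmatically, results.json can be read with the standard library. Note that the exact schema shown here (method and metric keys) is an assumption modeled on 3D-GS-style evaluation output, not a guarantee:

```python
import json
import os
import tempfile

# Assumed schema: one entry per rendered method, each with standard metrics.
sample = {"ours_30000": {"PSNR": 27.5, "SSIM": 0.81, "LPIPS": 0.22}}

with tempfile.TemporaryDirectory() as out:
    path = os.path.join(out, "results.json")
    with open(path, "w") as f:
        json.dump(sample, f)

    # Read the metrics back, as you would from output/scene_name/results.json.
    with open(path) as f:
        results = json.load(f)

for method, metrics in results.items():
    print(method, " ".join(f"{k}={v}" for k, v in metrics.items()))
```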

Training a single scene

To train a single scene, run:

python train.py -s /path/to/input_scene --eval -m /path/to/output

with optional parameters:

  • --lambda_mask: the coefficient of the mask loss
  • --mask_from_iter: the iteration at which the mask loss starts
  • --mask_until_iter: the iteration at which the mask loss ends

There are 3 settings in the paper, and their configurations can be found in run_all.py.
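The way these three flags interact can be sketched as a loss schedule; the function and default values below are illustrative, not the paper's actual settings:

```python
def total_loss(render_loss, mean_mask, iteration,
               lambda_mask=0.0005, mask_from_iter=1000, mask_until_iter=30000):
    """Add the mask sparsity term only inside the configured iteration window.

    mean_mask stands for the average keep-probability over all Gaussians;
    penalising it pushes probabilities down unless rendering quality
    genuinely needs a Gaussian. (Defaults here are hypothetical.)
    """
    if mask_from_iter <= iteration < mask_until_iter:
        return render_loss + lambda_mask * mean_mask
    return render_loss

print(total_loss(0.05, 0.8, iteration=500))    # before the window: render loss only
print(total_loss(0.05, 0.8, iteration=5000))   # inside: adds lambda_mask * mean_mask
```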

Finally, to render and evaluate on the test set, run:

python render.py -m /path/to/output --skip_train
python metrics.py -m /path/to/output

Since we do not save the masks, no special handling is required for evaluation.

Post-training and evaluation

To prune an already trained 3DGS, specify its checkpoint path in scripts/run_prune_finetune.sh and run:

bash scripts/run_prune_finetune.sh
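Conceptually, the pruning step keeps only those Gaussians whose learned keep-probability is high enough; a minimal sketch with hypothetical names (the real script operates on full 3DGS checkpoints and then fine-tunes the survivors):

```python
def prune(gaussians, keep_probs, threshold=0.5):
    """Keep only Gaussians whose learned keep-probability clears the threshold."""
    assert len(gaussians) == len(keep_probs)
    return [g for g, p in zip(gaussians, keep_probs) if p >= threshold]

# Toy example: four Gaussians with illustrative keep-probabilities.
gaussians = ["g0", "g1", "g2", "g3"]
keep_probs = [0.9, 0.2, 0.7, 0.05]
pruned = prune(gaussians, keep_probs)
print(pruned)  # ['g0', 'g2']
```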

LICENSE

Please follow the LICENSE of 3D-GS.

TODO List

  • Code of MaskGaussian + Taming-3DGS.
  • Support post-training.

Contact

BibTeX

@InProceedings{Liu_2025_CVPR,
    author    = {Liu, Yifei and Zhong, Zhihang and Zhan, Yifan and Xu, Sheng and Sun, Xiao},
    title     = {MaskGaussian: Adaptive 3D Gaussian Representation from Probabilistic Masks},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {681-690}
}

Acknowledgement

This project is built upon 3D-GS, Compact-3DGS, and LightGaussian. We thank all authors for their great work!
