Official implementation of NeurIPS 2024 paper: "FreeSplat: Generalizable 3D Gaussian Splatting Towards Free-View Synthesis of Indoor Scenes"


FreeSplat: Generalizable 3D Gaussian Splatting Towards Free-View Synthesis of Indoor Scenes

Yunsong Wang, Tianxin Huang, Hanlin Chen, Gim Hee Lee

NeurIPS 2024


🌏 News

[01 April 2025] We have introduced an enhanced version, FreeSplat++, which focuses on feed-forward whole-scene reconstruction with 3D Gaussians. You may want to take a look at its Project Page, arXiv, and Code.

[01 April 2025] We updated the code for training and evaluation on the RE10K dataset.

[12 May 2025] We fixed the errors in training when using batch size > 1.


📃 Abstract

FreeSplat is a generalizable 3DGS method for indoor scene reconstruction that leverages low-cost 2D backbones for feature extraction and cost volumes for multi-view aggregation. Furthermore, FreeSplat introduces a Pixel-wise Triplet Fusion (PTF) module that merges multi-view 3D Gaussians, removing redundant ones while providing point-level latent fusion and regularizing Gaussian localization. FreeSplat shows consistent quality and efficiency improvements, especially when given large numbers of input views.


📌 Installation

To get started, create a virtual environment using Python 3.10+:

git clone https://github.com/wangys16/FreeSplat.git
cd FreeSplat
conda create -n freesplat python=3.10
conda activate freesplat
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt

If your system does not use CUDA 12.1 by default, see the troubleshooting tips (from pixelSplat) below.

Troubleshooting

The Gaussian splatting CUDA code (diff-gaussian-rasterization) must be compiled using the same version of CUDA that PyTorch was compiled with. As of December 2023, the version of PyTorch you get when doing pip install torch was built using CUDA 12.1. If your system does not use CUDA 12.1 by default, you can try the following:

  • Install a version of PyTorch that was built using your CUDA version. For example, to get PyTorch with CUDA 11.8, use the following command (more details here):
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
  • Install CUDA Toolkit 12.1 on your system. One approach (try this at your own risk!) is to install a second CUDA Toolkit version using the runfile (local) option here. When you run the installer, disable the options that install GPU drivers and update the default CUDA symlinks. If you do this, you can point your system to CUDA 12.1 during installation as follows:
LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64 pip install -r requirements.txt
# If everything else was installed but you're missing diff-gaussian-rasterization, do:
LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64 pip install git+https://github.com/dcharatan/diff-gaussian-rasterization-modified
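
To confirm which CUDA version your PyTorch build was compiled against, a quick check from Python (torch must already be installed); compare the result against nvcc --version for the system toolkit:

import torch

# Prints the CUDA version this PyTorch build was compiled with, e.g. "12.1".
print(torch.version.cuda)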

📁 Acquiring Datasets

FreeSplat is trained on about 100 ScanNet scenes, following NeRFusion and SurfelNeRF, and evaluated on the ScanNet and Replica datasets.

You can download our preprocessed ScanNet and Replica data here. To download RE10K and ACID, follow pixelSplat. After downloading, the datasets/ directory should look like:

datasets
├─ scannet
│  ├─ train
│  │  ├─ scene0005_00
│  │  │  ├─ color (RGB images)
│  │  │  ├─ depth (depth images)
│  │  │  ├─ intrinsic (intrinsics)
│  │  │  └─ extrinsics.npy (camera extrinsics)
│  │  ├─ scene0020_00
│  │  └─ ...
│  ├─ test
│  │  └─ ...
│  ├─ train_idx.txt (training scenes list)
│  └─ test_idx.txt (testing scenes list)
├─ replica
│  ├─ test
│  └─ test_idx.txt (testing scenes list)
├─ re10k
│  ├─ train
│  └─ test
└─ acid
   ├─ train
   └─ test
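
As a quick sanity check of the layout above, here is a minimal sketch that loads one ScanNet scene's camera extrinsics and counts its RGB frames (the script is illustrative and not part of the repo):

from pathlib import Path
import numpy as np

# Illustrative sanity check of the datasets/ layout above; not part of the repo.
scene = Path("datasets/scannet/train/scene0005_00")
extrinsics = np.load(scene / "extrinsics.npy")  # per-frame camera extrinsics
colors = sorted((scene / "color").iterdir())    # RGB images
print(len(colors), "RGB frames; extrinsics shape:", extrinsics.shape)
assert (scene / "depth").is_dir() and (scene / "intrinsic").is_dir()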

Our sampled views for evaluation under different settings are in assets/evaluation_index_{dataset}_{N}views.json.
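
The exact schema of these index files is not documented here; a minimal way to peek at one (instantiating the template above with dataset=scannet and N=2) is:

import json

# Inspect a sampled-views index file; the structure is whatever json.load returns.
with open("assets/evaluation_index_scannet_2views.json") as f:
    index = json.load(f)
print(len(index), "entries")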

Acquiring Pre-trained Checkpoints

You can find our pre-trained checkpoints here and download them to checkpoints/.
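
To verify a download before running evaluation, a checkpoint can be inspected with plain PyTorch (a sketch assuming standard Lightning-style .ckpt files; the exact keys may differ):

import torch

# Assumes a standard Lightning-style checkpoint saved by the training code.
ckpt = torch.load("checkpoints/2views.ckpt", map_location="cpu")
print(list(ckpt.keys()))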

🎮 Running the Code

Training

ScanNet

The main entry point is src/main.py. To train FreeSplat under the 2-views, 3-views, and FVT settings, respectively, run:

python -m src.main +experiment=scannet/2views +output_dir=train_2views
python -m src.main +experiment=scannet/3views +output_dir=train_3views
python -m src.main +experiment=scannet/fvt +output_dir=train_fvt

The output will be saved under outputs/***.

RE10K

To train on RE10K under the 2-views or FVT setting, run one of:

python -m src.main +experiment=re10k/2views +output_dir=train_re10k_2views
python -m src.main +experiment=re10k/fvt +output_dir=train_re10k_fvt


Evaluation

ScanNet and Replica

To evaluate a pre-trained model under the [N]-views setting on [DATASET], run:

python -m src.main +experiment=[DATASET]/[SETTING] +output_dir=[OUTPUT_PATH] mode=test dataset/view_sampler=evaluation checkpointing.load=[PATH_TO_CHECKPOINT] dataset.view_sampler.num_context_views=[N]

For example, to evaluate FreeSplat trained under the 2-views setting:

python -m src.main +experiment=scannet/2views +output_dir=test_scannet_2views mode=test dataset/view_sampler=evaluation checkpointing.load=checkpoints/2views.ckpt dataset.view_sampler.num_context_views=2

To evaluate FreeSplat-fvt under the ScanNet 10-views setting, run:

python -m src.main +experiment=scannet/fvt +output_dir=test_scannet_fvt mode=test dataset/view_sampler=evaluation checkpointing.load=checkpoints/fvt.ckpt dataset.view_sampler.num_context_views=10 model.encoder.num_views=9

Here, model.encoder.num_views=9 uses more nearby views for more accurate depth estimation. We also provide a whole-scene reconstruction example, which you can run with:

python -m src.main +experiment=scannet/fvt +output_dir=test_scannet_whole mode=test dataset/view_sampler=evaluation checkpointing.load=checkpoints/fvt.ckpt dataset.view_sampler.num_context_views=30 model.encoder.num_views=30

RE10K

To evaluate the 2-views results of your trained re10k_2views.ckpt checkpoint on RE10K, run:

python -m src.main +experiment=re10k/2views +output_dir=test_re10k_2views mode=test dataset/view_sampler=evaluation checkpointing.load=checkpoints/re10k_2views.ckpt dataset.view_sampler.num_context_views=2

Camera-Ready Updates

  1. Our current version directly uses the features extracted by the backbone for multi-view matching, achieving faster training and better performance with slight GPU overhead.
  2. For the Gaussian Splatting codebase, we now follow diff-gaussian-rasterization-w-depth for more accurate depth rendering.

🖊 BibTeX

If you find our work helpful, please consider citing our papers. Thank you!

@article{wang2025freesplat++,
  title={FreeSplat++: Generalizable 3D Gaussian Splatting for Efficient Indoor Scene Reconstruction},
  author={Wang, Yunsong and Huang, Tianxin and Chen, Hanlin and Lee, Gim Hee},
  journal={arXiv preprint arXiv:2503.22986},
  year={2025}
}
@article{wang2024freesplat,
  title={FreeSplat: Generalizable 3D Gaussian Splatting Towards Free-View Synthesis of Indoor Scenes},
  author={Wang, Yunsong and Huang, Tianxin and Chen, Hanlin and Lee, Gim Hee},
  journal={arXiv preprint arXiv:2405.17958},
  year={2024}
}

📜 Acknowledgements

Our code is largely based on pixelSplat, and our implementation also references SimpleRecon and MVSplat. Thanks for their great work!
