
MatSparse3D

PDF

This repository contains the code for the PBDL 2024 (CVPR Workshop) paper: Generating Material-Aware 3D Models from Sparse Views.

MatSparse3D introduces a novel approach for generating material-aware 3D models from sparse-view images using generative models and efficient pre-integrated rendering. The output is a relightable model that represents geometry, material, and lighting independently, allowing downstream tasks to manipulate each component separately.
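As a rough illustration of the pre-integrated (split-sum) shading idea the method builds on, the sketch below shades a point from decoupled geometry, material, and lighting inputs. This is a conceptual NumPy sketch, not the repository's implementation; irradiance_map, prefiltered_env, and brdf_lut are hypothetical lookup functions standing in for the pre-integrated lighting terms.

import numpy as np

# Conceptual sketch of pre-integrated (split-sum) shading.
# irradiance_map, prefiltered_env, and brdf_lut are hypothetical
# placeholders for pre-integrated lighting; they are NOT this repo's API.
def shade(normal, view, albedo, roughness, metallic,
          irradiance_map, prefiltered_env, brdf_lut):
    n_dot_v = max(float(np.dot(normal, view)), 1e-4)
    reflect = 2.0 * n_dot_v * normal - view  # mirror reflection direction

    # Diffuse: albedo times irradiance pre-integrated over the hemisphere.
    diffuse = albedo * irradiance_map(normal)

    # Specular: environment map pre-filtered by roughness, scaled by the
    # pre-integrated BRDF lookup (scale, bias) indexed by (n.v, roughness).
    f0 = 0.04 * (1.0 - metallic) + albedo * metallic  # base reflectance
    scale, bias = brdf_lut(n_dot_v, roughness)
    specular = prefiltered_env(reflect, roughness) * (f0 * scale + bias)

    return diffuse + specular

Because geometry (normal), material (albedo, roughness, metallic), and lighting (the pre-integrated maps) enter as separate inputs, any one of them can be swapped at render time, which is what makes the output relightable.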


Install

Create the environment using mamba (or conda), then install the additional packages with pip:

mamba env create --file environment.yml
pip install -r requirements_git+.txt 
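A quick sanity check that the environment is usable (assuming the pipeline requires a CUDA-enabled PyTorch build, which is typical for this stack):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"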

Our method uses the pretrained Zero123-XL model; its weights must be downloaded and saved in load/zero123:

mkdir load/zero123
cd load/zero123
wget https://zero123.cs.columbia.edu/assets/zero123-xl.ckpt
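Optionally, the downloaded checkpoint can be sanity-checked from Python (an assumption: it is a standard PyTorch checkpoint dict; this loads it into CPU memory):

import torch

# Assumption: zero123-xl.ckpt is a standard PyTorch checkpoint dict.
ckpt = torch.load("load/zero123/zero123-xl.ckpt", map_location="cpu")
print(sorted(ckpt.keys()))  # a Lightning-style checkpoint exposes 'state_dict'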

Datasets

  • DTU-MVS: Download the DTU sample set from DTU-MVS. Depth and masks can be estimated by running the preprocessing step below.
  • sparse-relight: We provide our generated sparse-relight scenes on Google Drive. The provided data already includes estimated depth, so no preprocessing is needed.

Expected directory layout (with a quick check sketched after the tree):
 DATA/
 ├── DTU-MVS/
 │   └── SampleSet
 └── sparse-relight/
     ├── light-probes/
     ├── mesh/
     └── synthesis-images/
         ├── cartooncar/
         ├── gramophone/
         └── ......
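A small check (a hypothetical helper, not part of the repo) to confirm the layout above before running anything:

from pathlib import Path

# Verify the expected DATA/ layout sketched above.
root = Path("DATA")
for sub in ("DTU-MVS/SampleSet", "sparse-relight/light-probes",
            "sparse-relight/mesh", "sparse-relight/synthesis-images"):
    status = "ok" if (root / sub).exists() else "MISSING"
    print(f"{sub}: {status}")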

Preprocessing (optional)

Omnidata is used for depth and normal prediction. The following checkpoint paths are hardcoded in preprocess_image.py, so the checkpoints must be downloaded first:

mkdir load/omnidata
cd load/omnidata
# download omnidata_dpt_depth_v2.ckpt
gdown '1Jrh-bRnJEjyMCS7f-WsaFlccfPjJPPHI&confirm=t' 
## optionally download omnidata_dpt_normal_v2.ckpt for normal prediction
# gdown '1wNxVO4vVbDEMEpnAi_jwQObf2MFodcBR&confirm=t' 

Preprocess the data; pass the optional --use_normal flag to also estimate normals if needed (see the example after the commands below).

# e.g. for sparse-relight data
python preprocess.py --scene_dir DATA/sparse-relight/cartooncar

# e.g. for DTU data
python preprocess_DTU.py --scene_dir DATA/DTU-MVS/SampleSet/MVS-Data/Rectified/scan56
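If normals are needed (and the optional omnidata normal checkpoint above has been downloaded), append the flag, e.g.:

# e.g. preprocessing with normal estimation
python preprocess.py --scene_dir DATA/sparse-relight/cartooncar --use_normal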

Training

Start training by running launch.py with the --train flag.

  • The number of training views can be specified via data.train_views.
  • Config files for the DTU and sparse-relight datasets are available.

For more settings, please refer to the config files.

# to train the proposed MatSparse3D model on Sparse-Relight dataset
python launch.py --config configs/matsparse3d_sparserelight.yaml --train --gpu 0 data.train_views=5

# to train the proposed MatSparse3D model on DTU dataset
python launch.py --config configs/matsparse3d_DTU.yaml --train --gpu 0 data.train_views=5

# to train the Zero123-n model
python launch.py --config configs/zero123n_sparserelight.yaml --train --gpu 0 data.train_views=5

# to train the nvdiffrec-n model
python launch.py --config configs/nvdiffrec_sparserelight.yaml --train --gpu 0 data.train_views=5
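Other config entries can be overridden from the command line in the same dotted key=value style (a threestudio convention that this launcher follows); for example, assuming the standard threestudio tag field used for naming trial directories:

# hypothetical example: fewer views plus a custom trial tag
python launch.py --config configs/matsparse3d_sparserelight.yaml --train --gpu 0 data.train_views=3 tag=3views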

Model Evaluations

Rendering Interpolated Views

Use --validate to render interpolated views; an example is provided in script_validate.sh.

# beware to specify the exp_folder
bash script_validate.sh 
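The script is not reproduced here; following the threestudio convention, it presumably resumes a trained checkpoint along these lines (the trial directory is a placeholder you must adapt):

python launch.py --config <exp_folder>/configs/parsed.yaml --validate --gpu 0 resume=<exp_folder>/ckpts/last.ckpt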
(Example output video: CartoonCar_val.mp4)

Testing

Use --test to evaluate on the test views. For the sparse-relight dataset, relighting results are also reported. An example is provided in script_test.sh.

# beware to specify the exp_folder
bash script_test.sh 
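Analogously, script_test.sh presumably wraps a resumed launch.py call with the --test flag (paths are placeholders):

python launch.py --config <exp_folder>/configs/parsed.yaml --test --gpu 0 resume=<exp_folder>/ckpts/last.ckpt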
(Example output video: CartoonCar_test_relight.mp4)

Exporting Mesh

Use --export to export geometry from a trained model; an example is provided in script_export.sh.

# beware to specify the exp_folder
bash script_export.sh 
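In threestudio-based projects, export typically resumes the checkpoint with --export and selects a mesh exporter; a sketch, assuming MatSparse3D follows the same convention (system.exporter_type is the standard threestudio key and may differ here):

python launch.py --config <exp_folder>/configs/parsed.yaml --export --gpu 0 resume=<exp_folder>/ckpts/last.ckpt system.exporter_type=mesh-exporter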

Credits

MatSparse3D is built on the following open-source projects:

  • Threestudio: a unified framework for 3D content creation
  • stable-dreamfusion: for the Zero-1-to-3 implementation
  • NeuSPIR: Learning Relightable Neural Surface using Pre-Integrated Rendering

Citation

If MatSparse3D is relevant to your project, please cite our associated paper:

@InProceedings{mao2023matsparse3d,
  author    = {Mao, Shi and Wu, Chenming and Yi, Ran and Shen, Zhelun and Zhang, Liangjun and Heidrich, Wolfgang},
  title     = {Generating Material-Aware 3D Models from Sparse Views},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year      = {2024},
  month     = {June},
}
