This is the official codebase for MaterialSeg3D, a novel approach that generates surface PBR material information for 3D assets from 2D prior knowledge.
Motivated by the fact that expert 3D modelers tend to apply surface PBR materials manually based on their prior knowledge of materials, we utilize 2D perception-based methods to learn material information from existing 2D images collected from public websites and datasets. We construct a single-object material segmentation dataset, Materialized Individual Objects (MIO), and propose MaterialSeg3D, a novel workflow that automatically predicts the surface material information of a given 3D asset.
Our MIO dataset can be accessed through Google Drive.
The Roughness and Metalness values for each material class can be found in `MIO.pkl` under the key `coordinates`.
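Values like these can be read with Python's `pickle` module. The layout below is a hypothetical toy example, not the actual structure of `MIO.pkl`, so inspect the loaded object's keys on your copy first:

```python
import os
import pickle
import tempfile

# Hypothetical layout -- the real MIO.pkl may be structured differently,
# so print/inspect the loaded object's keys on your copy first.
example = {
    "metal": {"roughness": 0.35, "metalness": 1.0},
    "wood": {"roughness": 0.80, "metalness": 0.0},
}

path = os.path.join(tempfile.gettempdir(), "example_mio.pkl")
with open(path, "wb") as f:
    pickle.dump(example, f)

# The same load pattern applies to the real MIO.pkl.
with open(path, "rb") as f:
    table = pickle.load(f)

print(sorted(table))                # ['metal', 'wood']
print(table["metal"]["roughness"])  # 0.35
```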
Our MIO++ dataset can be accessed through Google Drive.
- Download MaterialSeg3D

```shell
git clone https://github.com/PROPHETE-pro/MaterialSeg3D.git
cd MaterialSeg3D
conda create -n MaterialSeg3D python==3.9.15
conda activate MaterialSeg3D
```

- Install Text2Tex dependencies
```shell
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c bottler nvidiacub
conda install pytorch3d -c pytorch3d
conda install xformers -c xformers
cd Text2Tex
pip install -r requirements.txt
cd ..
```

It is necessary to download `control_sd15_depth.pth` from the Hugging Face page and put it under `./Text2Tex/models/ControlNet/models/`.
Update on 2025.04.19: when running `pip install -r requirements.txt`, the package `pytorch-lightning==1.9.1` will automatically update your torch to the latest version, because torch was not installed by pip. Remove this package from `requirements.txt` and instead run `conda install pytorch-lightning=1.9.1 pytorch=1.12.1 -c conda-forge` manually. `huggingface_hub` also needs to be downgraded: run `pip install huggingface_hub==0.22.2` after installing `transformers`.
- Install GET3D dependencies

```shell
pip install ninja xatlas gdown
pip install git+https://github.com/NVlabs/nvdiffrast/
pip install meshzoo ipdb imageio gputil h5py point-cloud-utils imageio imageio-ffmpeg==0.4.4 pyspng==0.1.0
pip install urllib3
pip install scipy
pip install click
pip install tqdm
pip install opencv-python==4.5.4.58
```

- Build Blender environment
```shell
# cd MaterialSeg3D
wget https://ftp.halifax.rwth-aachen.de/blender/release/Blender2.90/blender-2.90.0-linux64.tar.xz
tar -xvf blender-2.90.0-linux64.tar.xz
cd blender-2.90.0-linux64/2.90/python/bin
./python3.7m -m ensurepip
./python3.7m -m pip install numpy
```

- Install MMSegmentation
```shell
pip install -U openmim
mim install mmengine
mim install "mmcv==2.1.0"
cd mmsegmentation
pip install -v -e .
```

- Download the segmentation model weights from Google Drive, unzip, rename the folder to `./work_dir`, and place it under `MaterialSeg3D/mmsegmentation/`.
- Our method is tested on A30 and A100 GPUs. The versions listed for the important packages reflect our server setup; if errors occur during installation, please refer to the versions below.
- Python == 3.9.15
- PyTorch == 1.12.1
- mmcv == 2.1.0
- mmengine == 0.10.3
- mmseg == 1.2.2
- Our workflow requires the environment to be compatible with GET3D, Text2Tex, and MMSegmentation.
- Updated 2025.04.19: the `pip list` of a working environment is uploaded as `MaterialSeg3d-envs`; please refer to those versions if necessary.
```shell
cd MaterialSeg3D
python gradio_demo.py
```

- Update the directories to your own path: replace `/path-to-MaterialSeg3D/`. The files that need to be modified include `gradio_demo.py`, `./GET3D/render_shapenet_data/render_shapenet.py`, and `./Text2Tex/scripts/view_2_UV.py`.
- Provide the directory of your 3D asset folder, `./example/path_to_obj_file/`. Note that the folder should contain exactly one `.obj` mesh file, one Albedo RGB UV `.png` file, and one `.mtl` file. The folder name should match the name of the `.obj` file (e.g. `./example/car/car.obj`).
- Select the category of your asset. We currently support `car`, `furniture`, `building`, `instrument`, and `plant` for prediction.
- The intermediate output files of each stage can be found in `/output`; the final generated ORM UV map and the materialized asset can be found in the object folder.
- The display quality of `gr.Model3D` is not satisfying, so we highly recommend rendering the asset in UE5 or Blender with the generated ORM UV map.
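The asset-folder layout described above can be verified with a small script before running the demo. This is an illustrative sketch (the helper `check_asset_folder` is not part of the repo), exercised here on a hypothetical `car/` folder built in a temporary directory:

```python
from pathlib import Path
import tempfile

def check_asset_folder(folder: Path) -> bool:
    # Expected layout: exactly one .obj whose stem matches the folder
    # name, plus one albedo .png and one .mtl file.
    objs = list(folder.glob("*.obj"))
    return (len(objs) == 1
            and objs[0].stem == folder.name
            and len(list(folder.glob("*.png"))) == 1
            and len(list(folder.glob("*.mtl"))) == 1)

# Build a hypothetical ./example/car/ layout in a temp directory.
root = Path(tempfile.mkdtemp()) / "car"
root.mkdir(parents=True)
for name in ("car.obj", "car.mtl", "albedo.png"):
    (root / name).touch()

print(check_asset_folder(root))  # True
```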
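For reference, an ORM map packs three grayscale maps into the channels of a single RGB texture: R = ambient Occlusion, G = Roughness, B = Metalness (the channel convention used by glTF's occlusion/metallic-roughness packing; the toy 2×2 values below are made up for illustration):

```python
import numpy as np

# Three toy 2x2 grayscale maps in [0, 1]; real UV maps are simply
# larger arrays read from the generated textures.
occlusion = np.array([[1.0, 1.0], [0.9, 0.8]])
roughness = np.array([[0.35, 0.35], [0.80, 0.80]])
metalness = np.array([[1.0, 1.0], [0.0, 0.0]])

# Pack channels: R = occlusion, G = roughness, B = metalness.
orm = np.stack([occlusion, roughness, metalness], axis=-1)  # H x W x 3
orm_u8 = (orm * 255).round().astype(np.uint8)

print(orm_u8.shape)  # (2, 2, 3)
print(orm_u8[0, 0])  # [255  89 255]
```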
```bibtex
@article{li2024materialseg3d,
  title={MaterialSeg3D: Segmenting Dense Materials from 2D Priors for 3D Assets},
  author={Li, Zeyu and Gan, Ruitong and Luo, Chuanchen and Wang, Yuxi and Liu, Jiaheng and Zhang, Ziwei Zhu Man and Li, Qing and Yin, Xucheng and Zhang, Zhaoxiang and Peng, Junran},
  journal={arXiv preprint arXiv:2404.13923},
  year={2024}
}
```


