PeterHUistyping/M3ashy
M3ashy: Multi-Modal Material Synthesis via Hyperdiffusion

AAAI 2026

Chenliang Zhou, Zheyuan Hu, Alejandro Sztrajman, Yancheng Cai, Yaru Liu, Cengiz Öztireli

Department of Computer Science and Technology
University of Cambridge

[Project page] [Paper] [Base model weights] [NeuMERL dataset]

Figure: 3D models and scenes rendered with our synthesized neural materials demonstrate visually rich results.

This project was formerly known as NeuMaDiff: Neural Material Synthesis via Hyperdiffusion.

Abstract

High-quality material synthesis is essential for replicating complex surface properties to create realistic scenes. Despite advances in the generation of material appearance based on analytic models, the synthesis of real-world measured BRDFs remains largely unexplored.

To address this challenge, we propose M3ashy, a novel multi-modal material synthesis framework based on hyperdiffusion. M3ashy enables high-quality reconstruction of complex real-world materials by leveraging neural fields as a compact continuous representation of BRDFs. Furthermore, our multi-modal conditional hyperdiffusion model allows for flexible material synthesis conditioned on material type, natural language descriptions, or reference images, providing greater user control over material generation.

To support future research, we contribute two new material datasets and introduce two BRDF distributional metrics for more rigorous evaluation. We demonstrate the effectiveness of M3ashy through extensive experiments, including a novel statistics-based constrained synthesis, which enables the generation of materials of desired categories.
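To make the neural-field idea concrete, here is a minimal illustrative sketch (not the paper's architecture; the layer sizes and angle parametrization are assumptions): each BRDF is represented as a small MLP that maps a direction parametrization, such as the Rusinkiewicz angles (theta_h, theta_d, phi_d), to RGB reflectance.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyBRDFField:
    """Toy neural field: (theta_h, theta_d, phi_d) -> RGB reflectance."""

    def __init__(self, hidden=32):
        # One hidden layer; real models are deeper and trained per material.
        self.W1 = rng.standard_normal((3, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((hidden, 3)) * 0.1
        self.b2 = np.zeros(3)

    def __call__(self, angles):
        # angles: (N, 3) array of (theta_h, theta_d, phi_d)
        h = np.maximum(angles @ self.W1 + self.b1, 0.0)  # ReLU
        return h @ self.W2 + self.b2                     # (N, 3) RGB

field = TinyBRDFField()
rgb = field(np.array([[0.1, 0.5, 1.0]]))
```

The compactness comes from storing only the MLP weights per material, which also gives the hyperdiffusion model a fixed-size vector to generate.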

Dataset and base model

For material synthesis, the weights of the pre-trained base models are available on Hugging Face: Synthesis model weights. Please download the model weights and put them in the model folder (see details here).

Our NeuMERL dataset is available on Hugging Face: NeuMERL dataset. Please download the dataset and put it in the data/NeuMERL folder (see details here).

  • Release of neural augmented MERL BRDF (NeuMERL) dataset.
  • Release of pre-trained model weights.
  • Release of the codebase with README and a Python notebook.

Installation

Environment: Python 3.10.15 or other compatible versions.

The PyTorch device is selected in descending order of preference: CUDA, then Apple MPS, then CPU (see device.py).

pip install -r requirements.txt
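The selection order above can be sketched as follows (an illustrative stand-in for the repository's device.py, with the availability flags passed in; in practice these would come from torch.cuda.is_available() and torch.backends.mps.is_available()):

```python
def pick_device(cuda_available, mps_available):
    """Return the preferred backend name in descending order of priority."""
    # In the real code these flags would be queried from PyTorch, e.g.
    # torch.cuda.is_available() and torch.backends.mps.is_available().
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"
```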

How to run

After downloading the data and model weights, see the interactive Python notebook NeuMaDiff.ipynb for a step-by-step guide.

NeuMaDiff: Neural Material Synthesis via Hyperdiffusion

    1. Create the output folder.
# output folder
mkdir -p output/generation/

    2. Sample materials from the pre-trained model, using either

python src/pytorch/train.py --file_index -1 --pytorch_model_type 2 --sample 1 --model_weights_path model/NeuMaDiff-diversity.pth

or

python src/pytorch/train.py --file_index -1 --pytorch_model_type 2 --sample 1 --model_weights_path model/NeuMaDiff-quality.pth
    3. Create folders for the generated materials.
mkdir -p output/generation/mlp/mlp_gen{0..120}
    4. Extract the MLP models from the .npy file.
python src/tools/merl_workflow/read_mlp_weight.py --file_index -1
    5. Infer the binary files of the synthesized materials from the MLP models, following the MERL format.
python src/tools/merl_workflow/write_merl_binary.py --file_index -1
    6. Render with the synthesized materials.

We use Mitsuba, a physically based renderer, to render the 3D models with the synthesized materials. You may find Neural-BRDF helpful.

  • [Optional] Train a new model from scratch.
python src/pytorch/train.py --file_index -1 --pytorch_model_type 2
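For reference, a MERL-style .binary file stores three little-endian int32 dimensions followed by float64 reflectance samples (one block per color channel); full MERL files use a 90 x 90 x 180 resolution. The sketch below is illustrative (not the repository's write_merl_binary.py) and uses toy dimensions:

```python
import os
import struct
import tempfile

def write_merl_binary(path, dims, samples):
    """Write a MERL-style binary: 3 int32 dims, then float64 samples."""
    assert len(samples) == dims[0] * dims[1] * dims[2] * 3
    with open(path, "wb") as f:
        f.write(struct.pack("<3i", *dims))
        f.write(struct.pack(f"<{len(samples)}d", *samples))

def read_merl_dims(path):
    """Read back the three int32 resolution dimensions from the header."""
    with open(path, "rb") as f:
        return struct.unpack("<3i", f.read(12))

# Tiny demo with toy dimensions (real MERL files use 90 x 90 x 180).
demo_path = os.path.join(tempfile.gettempdir(), "tiny_demo.binary")
write_merl_binary(demo_path, (2, 2, 2), [0.0] * 24)
dims = read_merl_dims(demo_path)
```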

Evaluation of synthesized materials

Please update the folder and file names to point to the .binary files or rendered images of the reference and synthesized sets.

There are two underlying distance metrics: BRDF space and image space.

    1. For BRDF space, the demo uses data/merl/blue-metallic-paint.binary from the MERL dataset.
python src/eval/metrics.py --is_brdf_space 1 --refer_set_size 1 --reference_folder_path "data/merl/" --sample_set_size 1 --sample_folder_path "data/merl/"
    2. For image space, the demo uses the output/img/ folder with rendered images.
python src/eval/metrics.py --is_brdf_space 0 --refer_set_size 1 --reference_img_path "output/img/" --sample_set_size 1 --sample_img_path "output/img/"
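As an illustration of how a set-to-set comparison works (this is a generic sketch, not the metrics implemented in src/eval/metrics.py): one common distributional measure is the minimum matching distance, which averages, over the reference set, each item's distance to its nearest sample.

```python
import math

def l2(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def minimum_matching_distance(reference_set, sample_set):
    """For each reference item, distance to its nearest sample, averaged."""
    return sum(
        min(l2(r, s) for s in sample_set) for r in reference_set
    ) / len(reference_set)
```

In the BRDF-space setting each vector would be a (flattened) BRDF; in image space, a rendered image embedding or pixel vector.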

[Optional] NeuMERL: Training MLP from scratch

To train NeuMERL from scratch, please download the MERL dataset from MERL and put the binary files in the data/merl folder (see details here). Please also download the initial model weights and put them in the model folder (see details here).

    1. Create the output folder.
# output folder
mkdir -p output/merl/merl_1/blue-metallic-paint/
    2. Train a NeuMERL MLP model from scratch (file_index = {1, 2, ..., 24}).
python src/pytorch/train.py --pytorch_model_type 1 --file_index 1
  • [Optional] Train multiple models in a loop.
# For all 24 * 100 materials
bash src/tools/create_folder.sh

Set file_index = {1, 2, ..., 24} and set from_list = 1:

python src/pytorch/train.py --pytorch_model_type 1 --file_index 1 --from_list 1

Remark: Each file contains the filenames of 100 materials, so the total number of materials is 24 * 100. Files 1-6 contain the 6 * 100 original MERL materials after color-channel permutation, and files 7-12, 13-18, and 19-24 contain the interpolated materials.

    3. Generate the concatenated .npy file from the MLP model weights, which forms the NeuMERL dataset.
python src/tools/merl_workflow/read_mlp_weight.py --file_index 1
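Conceptually, the concatenation step flattens each material's MLP weight tensors into one vector and stacks the per-material vectors into a single array that can be saved as a .npy file. A minimal sketch, with hypothetical function names (the actual layout lives in read_mlp_weight.py):

```python
import numpy as np

def flatten_weights(weight_arrays):
    """Flatten a material's MLP weight tensors into one 1-D vector."""
    return np.concatenate([w.ravel() for w in weight_arrays])

def build_dataset(per_material_weights):
    """Stack per-material weight vectors into one (n_materials, dim) array."""
    # per_material_weights: list (one entry per material) of lists of arrays
    return np.stack([flatten_weights(ws) for ws in per_material_weights])

# Toy example: two materials, each with a (2, 2) weight matrix and a (2,) bias.
mats = [[np.ones((2, 2)), np.zeros(2)], [np.zeros((2, 2)), np.ones(2)]]
dataset = build_dataset(mats)  # could be saved via np.save("neumerl.npy", dataset)
```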

Citation

Please feel free to contact us if you have any questions or suggestions.

If you found the paper or code useful, please consider citing:

@inproceedings{
    M3ashy2026, 
    author = {Chenliang Zhou and Zheyuan Hu and Alejandro Sztrajman and Yancheng Cai and Yaru Liu and Cengiz Oztireli}, 
    title = {M$^{3}$ashy: Multi-Modal Material Synthesis via Hyperdiffusion}, 
    year = {2026}, 
    booktitle = {Proceedings of the 40th AAAI Conference on Artificial Intelligence}, 
    location = {Singapore}, 
    series = {AAAI'26} 
}

Acknowledgement: We are grateful to the referenced works and the open-source community for their valuable contributions (see our paper and the repository License for a detailed list of references).
