AAAI 2026
Chenliang Zhou, Zheyuan Hu, Alejandro Sztrajman, Yancheng Cai, Yaru Liu, Cengiz Öztireli
Department of Computer Science and Technology
University of Cambridge
[Project page] [Paper] [Base model weights] [NeuMERL dataset]
Figure: 3D models and scenes rendered with our synthesized neural materials demonstrate visually rich results.
This project was formerly known as NeuMaDiff: Neural Material Synthesis via Hyperdiffusion.
High-quality material synthesis is essential for replicating complex surface properties to create realistic scenes. Despite advances in the generation of material appearance based on analytic models, the synthesis of real-world measured BRDFs remains largely unexplored.
To address this challenge, we propose M3ashy, a novel multi-modal material synthesis framework based on hyperdiffusion. M3ashy enables high-quality reconstruction of complex real-world materials by leveraging neural fields as a compact continuous representation of BRDFs. Furthermore, our multi-modal conditional hyperdiffusion model allows for flexible material synthesis conditioned on material type, natural language descriptions, or reference images, providing greater user control over material generation.
To support future research, we contribute two new material datasets and introduce two BRDF distributional metrics for more rigorous evaluation. We demonstrate the effectiveness of M3ashy through extensive experiments, including a novel statistics-based constrained synthesis, which enables the generation of materials of desired categories.
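As background for the representation: a neural-field BRDF is a small MLP that maps a pair of directions to RGB reflectance, so each material is stored as a fixed-size weight vector. The sketch below only illustrates this idea; the layer widths and the raw direction input are assumptions, not the paper's exact architecture.

```python
# Illustrative neural-field BRDF: a small MLP from a 6-D direction
# encoding to RGB reflectance. Widths are assumptions for this sketch.
import torch
import torch.nn as nn

class NeuralBRDF(nn.Module):
    def __init__(self, in_dim: int = 6, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB reflectance
        )

    def forward(self, wi: torch.Tensor, wo: torch.Tensor) -> torch.Tensor:
        # wi, wo: (N, 3) unit incoming/outgoing directions.
        return self.net(torch.cat([wi, wo], dim=-1))
```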
For material synthesis, the weights of the pre-trained base models are available on Hugging Face: Synthesis model weights. Please download the model weights and put them in the model folder (see details here).
Our NeuMERL dataset is available on Hugging Face: NeuMERL dataset. Please download the dataset and put it in the data/NeuMERL folder (see details here).
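If you prefer to script the downloads, huggingface_hub can fetch both repositories. The repo IDs below are placeholders; substitute the repositories linked above.

```python
# Sketch only: the repo IDs are placeholders for the repositories
# linked above.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="<org>/<synthesis-model-weights>", local_dir="model")
snapshot_download(repo_id="<org>/<NeuMERL>", repo_type="dataset",
                  local_dir="data/NeuMERL")
```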
- Release of the neural augmented MERL BRDF (NeuMERL) dataset.
- Release of pre-trained model weights.
- Release of the codebase with README and Python notebook.
Environment: Python 3.10.15 or other compatible versions.
The PyTorch device is selected in descending order of preference: CUDA, then Apple MPS, then CPU (see device.py).
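For reference, a minimal sketch of the selection order described above; the actual logic lives in device.py and may differ in detail.

```python
# Minimal sketch of the device-selection order described above; the
# actual logic lives in device.py.
import torch

def get_device() -> torch.device:
    if torch.cuda.is_available():           # prefer NVIDIA CUDA GPUs
        return torch.device("cuda")
    if torch.backends.mps.is_available():   # then Apple Silicon (MPS)
        return torch.device("mps")
    return torch.device("cpu")              # fall back to CPU
```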
```
pip install -r requirements.txt
```

After downloading the data and model weights, see the interactive Python notebook NeuMaDiff.ipynb for a step-by-step guide.
- Create the output folder.

```
# output folder
mkdir -p output/generation/
```
- Sample synthetic materials from the pre-trained synthesis model weights, using either

```
python src/pytorch/train.py --file_index -1 --pytorch_model_type 2 --sample 1 --model_weights_path model/NeuMaDiff-diversity.pth
```

or

```
python src/pytorch/train.py --file_index -1 --pytorch_model_type 2 --sample 1 --model_weights_path model/NeuMaDiff-quality.pth
```
- Create folders for the generated materials.

```
mkdir -p output/generation/mlp/mlp_gen{0..120}
```
- Extract the MLP models from the npy file (a sketch of this step follows the command).

```
python src/tools/merl_workflow/read_mlp_weight.py --file_index -1
```
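For intuition, this step reshapes each flattened weight vector back into an MLP's parameter tensors. A minimal sketch with illustrative layer shapes and paths (the real ones are fixed by the codebase):

```python
# Sketch: unflatten one generated weight vector into an MLP state_dict.
# The layer shapes and the npy filename are illustrative assumptions.
import numpy as np
import torch

shapes = {"net.0.weight": (64, 6), "net.0.bias": (64,),
          "net.2.weight": (3, 64), "net.2.bias": (3,)}

vec = np.load("output/generation/sample.npy")  # hypothetical path
state, offset = {}, 0
for name, shape in shapes.items():
    size = int(np.prod(shape))
    state[name] = torch.from_numpy(
        vec[offset:offset + size].reshape(shape)).float()
    offset += size
```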
- Infer the binary files of the synthesized materials from the MLP models, following the MERL format (see the sketch after the command).

```
python src/tools/merl_workflow/write_merl_binary.py --file_index -1
```
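For background, a MERL binary is a dense table over the Rusinkiewicz half/difference-angle grid (90 x 90 x 180), stored as doubles with fixed per-channel scale factors. The sketch below writes such a binary from a hypothetical query_brdf callable; it illustrates the layout rather than reproducing write_merl_binary.py.

```python
# Sketch: write a MERL-format binary from a BRDF query function.
# Grid dimensions and per-channel scales follow the standard MERL
# layout; `query_brdf` is a hypothetical callable mapping (N, 3) angles
# (theta_h, theta_d, phi_d) to (N, 3) RGB reflectance.
import struct
import numpy as np

DIMS = (90, 90, 180)                              # theta_h, theta_d, phi_d
SCALES = (1.0 / 1500, 1.15 / 1500, 1.66 / 1500)   # R, G, B scale factors

def write_merl(path, query_brdf):
    th = (np.arange(DIMS[0]) / DIMS[0]) ** 2 * (np.pi / 2)  # nonlinear theta_h
    td = np.arange(DIMS[1]) / DIMS[1] * (np.pi / 2)
    pd = np.arange(DIMS[2]) / DIMS[2] * np.pi
    angles = np.stack(np.meshgrid(th, td, pd, indexing="ij"), -1).reshape(-1, 3)
    rgb = query_brdf(angles)
    with open(path, "wb") as f:
        f.write(struct.pack("3i", *DIMS))          # header: grid resolution
        for c, scale in enumerate(SCALES):         # one block per channel
            (rgb[:, c] / scale).astype(np.float64).tofile(f)
```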
- Render with the synthesized materials.
We use Mitsuba, a physically based renderer, to render the 3D models with the synthesized materials. You may find Neural-BRDF helpful; a minimal rendering sketch follows.
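A minimal Mitsuba 3 example of loading and rendering a scene. Plugging the synthesized BRDF into the scene is not shown here (Neural-BRDF demonstrates one integration route), so the built-in Cornell box is just a placeholder.

```python
# Minimal Mitsuba 3 sketch; the built-in Cornell box stands in for a
# scene using a synthesized material.
import mitsuba as mi

mi.set_variant("scalar_rgb")
scene = mi.load_dict(mi.cornell_box())  # placeholder scene
image = mi.render(scene, spp=64)        # 64 samples per pixel
mi.util.write_bitmap("render.png", image)
```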
- [Optionally] train a new synthesis model from scratch.

```
python src/pytorch/train.py --file_index -1 --pytorch_model_type 2
```

To evaluate, please update the folder and file names to point to the .binary files or rendered images of the reference and synthesized sets.
There are two underlying distance metrics, computed in BRDF space and in image space (an illustrative set-to-set distance is sketched after the commands).

- For BRDF space, the demo uses data/merl/blue-metallic-paint.binary from the MERL dataset:

```
python src/eval/metrics.py --is_brdf_space 1 --refer_set_size 1 --reference_folder_path "data/merl/" --sample_set_size 1 --sample_folder_path "data/merl/"
```

- For image space, the demo uses the output/img/ folder with rendered images:

```
python src/eval/metrics.py --is_brdf_space 0 --refer_set_size 1 --reference_img_path "output/img/" --sample_set_size 1 --sample_img_path "output/img/"
```
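The exact distributional metrics are defined in the paper. Purely as an illustration of comparing a reference set against a sample set given pairwise distances, here is a chamfer-style sketch (not the paper's metric):

```python
# Illustrative chamfer-style set-to-set distance; this is NOT the
# paper's metric, just a generic example built on a pairwise distance
# matrix D of shape (num_reference, num_samples).
import numpy as np

def chamfer(D: np.ndarray) -> float:
    # Average nearest-neighbour distance in both directions.
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

D = np.random.rand(4, 5)  # stand-in pairwise distances
print(chamfer(D))
```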
To train the NeuMERL MLPs from scratch, please download the MERL dataset from MERL and put the binary files in the data/merl folder (see details here). Please also download the initial model weights and put them in the model folder (see details here).
- Create the output folder.

```
# output folder
mkdir -p output/merl/merl_1/blue-metallic-paint/
```
- Train a NeuMERL MLP model from scratch (file_index = {1, 2, ..., 24}). A sketch of the underlying fitting loop follows this list.

```
python src/pytorch/train.py --pytorch_model_type 1 --file_index 1
```

- [Optionally] train multiple models in a loop.

```
# For all 24 * 100 materials
bash src/tools/create_folder.sh
```

Set file_index = {1, 2, ..., 24} and set from_list = 1:

```
python src/pytorch/train.py --pytorch_model_type 1 --file_index 1 --from_list 1
```

Remark: each file lists the filenames of 100 materials, for 24 * 100 materials in total. Files 1-6 contain the 6 * 100 MERL original materials after color-channel permutation, and files 7-12, 13-18, and 19-24 contain the interpolated materials.
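For intuition, fitting one NeuMERL MLP is a regression of a small network onto reflectance samples read from a MERL binary. A minimal sketch with stand-in data; the codebase defines the actual loader, loss, and architecture (log-space losses are common for BRDF fitting, plain MSE here is just for brevity):

```python
# Minimal sketch of fitting an MLP to measured BRDF samples.
# `inputs` (N, 6) and `targets` (N, 3) stand in for direction encodings
# and RGB reflectance read from a MERL binary.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

inputs = torch.rand(1024, 6)   # stand-in direction encodings
targets = torch.rand(1024, 3)  # stand-in RGB reflectance

for step in range(1000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    opt.step()
```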
- Generate the concatenated npy file from the MLP model weights; this file is the NeuMERL dataset. A flattening sketch follows the command.

```
python src/tools/merl_workflow/read_mlp_weight.py --file_index 1
```
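This step is essentially the inverse of the unflattening sketch above: each trained MLP's parameters are flattened into one vector and the vectors are stacked. A minimal sketch with illustrative paths and filenames:

```python
# Sketch: flatten trained MLP weights into one vector per material and
# stack them into a single npy file. All paths are illustrative.
import numpy as np
import torch

def flatten(state_dict) -> np.ndarray:
    return np.concatenate([t.numpy().ravel() for t in state_dict.values()])

paths = ["output/merl/merl_1/blue-metallic-paint/model.pth"]   # example path
vectors = [flatten(torch.load(p, map_location="cpu")) for p in paths]
np.save("data/NeuMERL/neumerl.npy", np.stack(vectors))         # hypothetical name
```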
Please feel free to contact us if you have any questions or suggestions.

If you found the paper or code useful, please consider citing:
```bibtex
@inproceedings{M3ashy2026,
  author    = {Chenliang Zhou and Zheyuan Hu and Alejandro Sztrajman and Yancheng Cai and Yaru Liu and Cengiz Oztireli},
  title     = {M$^{3}$ashy: Multi-Modal Material Synthesis via Hyperdiffusion},
  year      = {2026},
  booktitle = {Proceedings of the 40th AAAI Conference on Artificial Intelligence},
  location  = {Singapore},
  series    = {AAAI'26}
}
```
Acknowledgement: We are grateful to the cited works and the open-source community for their valuable contributions (see our paper and the repository License for a detailed list of references).

