PyTorch implementation of [CVPR 2025] GenVDM: Generating Vector Displacement Maps From a Single Image, by Yuezhi Yang, Qimin Chen, Vladimir G. Kim, Siddhartha Chaudhuri, Qixing Huang, Zhiqin Chen
If you find our work useful in your research, please consider citing:
@inproceedings{yang2025genvdm,
title={GenVDM: Generating Vector Displacement Maps From a Single Image},
author={Yang, Yuezhi and Chen, Qimin and Kim, Vladimir G and Chaudhuri, Siddhartha and Huang, Qixing and Chen, Zhiqin},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2025},
}
Please install the environment by running:
conda create -n GenVDM python=3.10 -y
conda activate GenVDM
bash install_env.sh
We provide the pre-trained network weights. Please put them in the ./checkpoints/example_run directory.
Please see the dataset directory for instructions on how to download the dataset and how to create your own data.
To generate a VDM from an image, put your image in ./input and run:
bash generate.sh <image name> <exp name> <checkpoint name>
For example:
bash generate.sh ear2.png example_run example
Note that your image must be an RGBA image in PNG format, with the object centered in the image. See the example images in ./input.
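If you are unsure whether an input meets these requirements, the check and re-centering can be sketched as below. This is a hypothetical helper, not part of the repository: it validates that the file is an RGBA PNG and re-centers the object using the alpha channel's bounding box.

```python
# Hypothetical pre-processing helper (not part of this repo): verify an input
# is an RGBA PNG and re-center the object using the alpha channel.
from PIL import Image

def center_rgba(path_in: str, path_out: str) -> None:
    img = Image.open(path_in)
    if img.format != "PNG" or img.mode != "RGBA":
        raise ValueError("input must be an RGBA PNG")
    # Bounding box of non-transparent pixels (the object silhouette).
    bbox = img.getchannel("A").getbbox()
    if bbox is None:
        raise ValueError("image is fully transparent")
    obj = img.crop(bbox)
    # Paste the object back at the center of a same-sized transparent canvas.
    canvas = Image.new("RGBA", img.size, (0, 0, 0, 0))
    left = (img.width - obj.width) // 2
    top = (img.height - obj.height) // 2
    canvas.paste(obj, (left, top), obj)
    canvas.save(path_out)
```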
To train the network, put the rendered images in ./dataset/rendering_result and run:
python train.py --base config/example_run.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1
You can download demo.blend and the precomputed results from ./demo, or directly load the *.exr files from the outputVDM directory to play around with VDMs. An example VDM has been loaded for you. We highly recommend opening demo.blend with Blender 3.6, since higher versions may cause loading errors.
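For intuition about what the *.exr files encode, the sketch below (an illustration, not repository code) shows the core idea of a vector displacement map: each pixel stores an (x, y, z) offset rather than a single height, so it can represent overhangs that a scalar height map cannot. A synthetic VDM is applied to a flat grid; a real *.exr from outputVDM could be loaded instead (e.g. with the OpenEXR package, or with OpenCV's `cv2.imread(..., cv2.IMREAD_UNCHANGED)` after setting OPENCV_IO_ENABLE_OPENEXR=1).

```python
# Illustrative sketch: applying a vector displacement map to a flat grid.
# The synthetic VDM here stands in for a real *.exr from outputVDM.
import numpy as np

H = W = 32
# Base surface: a flat grid in the z=0 plane, shape (H, W, 3).
u, v = np.meshgrid(np.linspace(0, 1, W), np.linspace(0, 1, H))
base = np.stack([u, v, np.zeros_like(u)], axis=-1)

# Synthetic VDM: a central bump whose offsets also lean in +x,
# something a scalar height map could not represent.
r2 = (u - 0.5) ** 2 + (v - 0.5) ** 2
bump = np.exp(-r2 / 0.02)
vdm = np.stack([0.1 * bump, np.zeros_like(bump), 0.3 * bump], axis=-1)

# Displaced surface: add the per-pixel offset vector to each base point.
displaced = base + vdm
```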
You can learn VDM-related Blender instructions from here and here.
We have borrowed code from the following repositories. Many thanks to the authors for sharing their code.
