
TextureDiffusion: Target Prompt Disentangled
Editing for Various Texture Transfer

Zihan Su, Junhao Zhuang, Chun Yuan†
Tsinghua University
[ICASSP 2025 Oral]

Paper

TextureDiffusion

Release

  • [09/16] Initial Preview Release 🔥 Coming Soon!
  • [12/19] Official Release of Code 🔥 Available Now!

Contents

  • 🐶 Introduction
  • 💻 Installation
  • 🚀 Usage
  • 🙌🏻 Acknowledgement
  • 📖 BibTeX

🐶 Introduction

Recently, text-guided image editing has achieved significant success. However, when changing the texture of an object, existing methods can only apply simple textures such as wood or gold; complex textures such as cloud or fire remain a challenge. This limitation stems from the fact that the target prompt must contain both the input image content and <texture>, which restricts the texture representation. In this paper, we propose TextureDiffusion, a tuning-free image editing method for various texture transfer.

Pipeline overview (figure)

Qualitative comparison and quantitative comparison (figures)

💻 Installation

We recommend running our code on an NVIDIA GPU under Linux. Our method currently requires around 13 GB of GPU memory.

Clone the repo:

git clone https://github.com/THU-CVML/TextureDiffusion.git
cd TextureDiffusion

To install the required libraries, simply run the following command:

conda create -n TextureDiffusion python=3.8
conda activate TextureDiffusion
pip install -r requirements.txt
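
Optionally, you can sanity-check the environment before opening the notebook. This is an informal sketch (it assumes requirements.txt installs PyTorch, which Stable Diffusion needs), not part of the official setup:

# Optional environment check; assumes requirements.txt installs PyTorch.
import torch

assert torch.cuda.is_available(), "An NVIDIA GPU with CUDA is required."
total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
print(f"GPU: {torch.cuda.get_device_name(0)} with {total_gb:.1f} GB memory")
if total_gb < 13:
    print("Warning: this method needs around 13 GB of GPU memory.")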

🚀 Usage

The notebook main.ipynb provides editing examples.

Note: Within main.ipynb, you can set parameters such as attention_step, attention_layer, and resnet_step. We mainly conduct experiments on Stable Diffusion v1-4, but our method generalizes to other versions (e.g., v1-5).
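
For orientation, the notebook roughly follows the flow sketched below. Everything except the three parameter names is an illustrative placeholder (in particular, edit_image is a hypothetical wrapper, not the notebook's actual API), so consult main.ipynb for the real cells:

# Illustrative sketch of the main.ipynb workflow, not the exact notebook code.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # v1-5 also works, per the note above
    torch_dtype=torch.float16,
).to("cuda")

# Parameters exposed in main.ipynb; the values here are placeholders.
attention_step = 25
attention_layer = 10
resnet_step = 30

# edit_image is a hypothetical helper standing in for the notebook's cells.
edited = edit_image(
    pipe,
    image_path="examples/input.png",   # placeholder path
    target_prompt="fire",              # texture-only target prompt
    attention_step=attention_step,
    attention_layer=attention_layer,
    resnet_step=resnet_step,
)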

Dataset: The quantitative experiments use the "change material" editing category of PIE-Bench. We found that some text prompts do not actually describe a material change. For example, one source prompt is "the 2020 honda hrx is driving down the road" while the target prompt is "the 2020 honda hrx is driving down the road [full of flowers]".

We therefore modified these prompts; the result is mapping_file_modified.json. To run the quantitative experiments, use this file to replace mapping_file.json in PIE-Bench. The modified prompts are also listed in modified_prompt.txt.
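
As a convenience, the swap can be done in a couple of lines; the PIE-Bench path below is a placeholder for wherever your local checkout lives:

# Replace PIE-Bench's original mapping file with our modified prompts.
import shutil

shutil.copy(
    "mapping_file_modified.json",      # shipped in this repo
    "PIE-Bench/mapping_file.json",     # placeholder path to your PIE-Bench copy
)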

🙌🏻 Acknowledgement

Our code builds on several awesome open-source repos.

📖 BibTeX

If you find our repo helpful, please consider leaving a star or citing our paper :)

@inproceedings{su2025texturediffusion,
  title={TextureDiffusion: Target Prompt Disentangled Editing for Various Texture Transfer},
  author={Su, Zihan and Zhuang, Junhao and Yuan, Chun},
  booktitle={IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2025},
}
