Zeqian Long2*, Mingzhe Zheng1*, Kunyu Feng1*, Xinhua Zhang, Hongyu Liu1, Harry Yang1, Linfeng Zhang3, Qifeng Chen1, Yue Ma1
1 HKUST, 2 UIUC, 3 Shanghai Jiao Tong University
We propose Follow-Your-Shape, a training-free and mask-free framework that supports precise and controllable editing of object shapes while strictly preserving non-target content. Our method achieves superior editability and visual fidelity, particularly in tasks requiring large-scale shape replacement.
- [2025.8.11] Code released!
- [2025.8.11] Paper released!
Our code uses the same environment as FLUX. You can refer to the official FLUX repo, or run the following commands to set up the environment.
conda create -n FollowYourShape python=3.10
conda activate FollowYourShape
pip install -e ".[all]"
We recommend running the experiments on a single A100 GPU.
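As a quick sanity check (a minimal sketch; the exact PyTorch/CUDA versions follow the FLUX requirements), you can confirm that the environment sees the GPU:

```bash
# Verify that PyTorch is installed and a CUDA device is visible (sketch; adjust as needed)
python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no GPU')"
```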
A Gradio demo for image editing will be released soon.
For now, we provide several toy test examples in src/toy_test.
You can either run the provided bash scripts directly or create your own custom script (see the sketch after the command reference below).
You can also run the following command in the terminal to edit your own image:
cd src
python edit.py --source_prompt [your source image prompt] \
    --target_prompt [your editing prompt] \
    --guidance 2 \
    --source_img_dir [the path of your source image] \
    --num_steps 15 --offload \
    --front [typically set to 1 or 2] \
    --inject [typically set to 3 or 4] \
    --name 'flux-dev' \
    --output_dir [output path] \
    --controlnet_type [specify your controlnet type]
Please refer to the paper for the rationale and recommended values of the hyperparameters.
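Below is a minimal sketch of a custom script wrapping the command above. The script name (run_edit.sh), the example image path, the prompts, and the controlnet type are placeholders for illustration only; substitute your own values.

```bash
#!/usr/bin/env bash
# run_edit.sh -- sketch of a custom editing script (hypothetical paths/prompts; adjust to your own data)
cd src

python edit.py \
    --source_prompt "a photo of a cat sitting on a sofa" \
    --target_prompt "a photo of a dog sitting on a sofa" \
    --guidance 2 \
    --source_img_dir ./toy_test/cat.png \
    --num_steps 15 --offload \
    --front 1 \
    --inject 3 \
    --name 'flux-dev' \
    --output_dir ./outputs \
    --controlnet_type depth   # hypothetical value; specify your controlnet type
```

Run it with `bash run_edit.sh` and adjust the hyperparameters as described in the paper.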
If you find our work helpful, please star 🌟 this repo and cite 📑 our paper. Thanks for your support!

