- [2025-11-20] 🎉 UniFit has been accepted to AAAI 2026!
- [2025-11-20] 🚀 The official repository is created. We will release the code and checkpoints soon.
We are actively preparing the code for release. Please stay tuned!
- Release Paper (arXiv)
- Release Inference Code
  - Flux.1 Fill Backbone
  - SD 3.5 Medium Backbone
- Release Pretrained Models (Checkpoints)
  - UniFit (SD 3.5 Medium Backbone)
  - UniFit (Flux.1 Fill Backbone)
- Release Training Code
  - Data processing scripts
  - Training scripts for Stage I & II
Image-based virtual try-on (VTON) aims to synthesize photorealistic images of a person wearing specified garments. Despite significant progress, building a universal VTON framework that can flexibly handle diverse and complex tasks remains a major challenge. Recent methods explore multi-task VTON frameworks guided by textual instructions, yet they still face two key limitations: (1) a semantic gap between text instructions and reference images, and (2) data scarcity in complex scenarios. To address these challenges, we propose UniFit, a universal VTON framework driven by a Multimodal Large Language Model (MLLM). Specifically, we introduce an MLLM-Guided Semantic Alignment Module (MGSA), which integrates multimodal inputs using an MLLM and a set of learnable queries. By imposing a semantic alignment loss, MGSA captures cross-modal semantic relationships and provides coherent, explicit semantic guidance for the generative process, thereby narrowing the semantic gap. Moreover, through a two-stage progressive training strategy with a self-synthesis pipeline, UniFit learns complex tasks from limited data. Extensive experiments show that UniFit not only supports a wide range of VTON tasks, including multi-garment and model-to-model try-on, but also achieves state-of-the-art performance.
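To make the MGSA idea concrete, below is a minimal PyTorch sketch of learnable queries attending over MLLM hidden states, paired with a contrastive alignment loss. This is **not** the released implementation: all names, dimensions, and the InfoNCE-style loss formulation are our assumptions for illustration only.

```python
# A minimal sketch (not the authors' implementation) of an MLLM-guided
# semantic-alignment module. Dimensions and loss choice are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MGSASketch(nn.Module):
    def __init__(self, mllm_dim=1536, cond_dim=2048, num_queries=64):
        super().__init__()
        # Learnable queries that pool the MLLM's multimodal hidden states.
        self.queries = nn.Parameter(torch.randn(num_queries, mllm_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(mllm_dim, num_heads=8, batch_first=True)
        # Project pooled features to the generator's conditioning width.
        self.proj = nn.Linear(mllm_dim, cond_dim)

    def forward(self, mllm_hidden):
        # mllm_hidden: (B, L, mllm_dim), MLLM states over text + image tokens.
        q = self.queries.unsqueeze(0).expand(mllm_hidden.size(0), -1, -1)
        pooled, _ = self.cross_attn(q, mllm_hidden, mllm_hidden)
        return self.proj(pooled)  # (B, num_queries, cond_dim) conditioning tokens

def alignment_loss(query_feats, target_feats, temperature=0.07):
    """InfoNCE-style loss pulling pooled query features toward reference-image
    features (B, cond_dim) from a frozen encoder. Purely illustrative."""
    q = F.normalize(query_feats.mean(dim=1), dim=-1)  # (B, cond_dim)
    t = F.normalize(target_feats, dim=-1)             # (B, cond_dim)
    logits = q @ t.t() / temperature
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)

# Toy shapes only: 2 samples, 77 MLLM tokens.
mgsa = MGSASketch()
cond = mgsa(torch.randn(2, 77, 1536))               # -> (2, 64, 2048)
loss = alignment_loss(cond, torch.randn(2, 2048))
```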
Coming soon.
Coming soon. We will provide weights for both the SD 3.5 Medium and Flux.1 Fill backbones.
| Model | Backbone | Description | Download |
|---|---|---|---|
| UniFit-SD3.5 | SD 3.5 Medium | Balanced speed and quality (Recommended) | Link |
| UniFit-Flux | Flux.1 Fill | Higher fidelity and prompt adherence | Link |
(Official example commands will be added upon code release; an illustrative sketch follows below.)
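Until the official scripts land, the snippet below shows what mask-based inpainting inference with the Flux.1 Fill backbone looks like using stock Diffusers. Note this runs the plain `FLUX.1-Fill-dev` base model, not UniFit: the input file names are placeholders, and actual UniFit inference (including MGSA conditioning and released checkpoints) will differ.

```python
# Sketch only: stock Diffusers FluxFillPipeline on the base model UniFit builds on.
# "person.jpg" / "garment_mask.png" are placeholder paths.
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("person.jpg")          # person image
mask = load_image("garment_mask.png")     # region to repaint (garment area)

result = pipe(
    prompt="a person wearing the reference garment",
    image=image,
    mask_image=mask,
    height=1024,
    width=768,
    guidance_scale=30.0,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
result.save("tryon_result.png")
```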
Our code is built on Diffusers. We use Stable Diffusion 3.5 Medium and FLUX.1-Fill-dev as base models, and adopt Qwen2-VL-2B-Instruct for the MGSA module. Thanks to all the contributors!
