
MAGREF: Masked Guidance for Any-Reference Video Generation with Subject Disentanglement

Yufan Deng, Yuanyang Yin, Xun Guo, Yizhi Wang, Jacob Zhiyuan Fang,
Shenghai Yuan, Yiding Yang, Angtian Wang, Bo Liu, Haibin Huang, Chongyang Ma


Intelligent Creation Team, ByteDance

teaser

🔥 News

  • [2025.10.10] 🔥 The MAGREF research paper is now available, and the project page is live.

  • [2025.06.20] 🙏 Thanks to Kijai for developing the ComfyUI nodes for MAGREF and an FP8-quantized Hugging Face model! Feel free to try them out and add MAGREF to your workflow.

  • [2025.06.18] 🔥 In progress: we are actively collecting and processing more diverse datasets and scaling up training with additional compute to further improve resolution, temporal consistency, and generation quality. Stay tuned!

  • [2025.06.16] 🔥 MAGREF is here! The inference code and checkpoint have been released.

🎥 Demo

magref_video.mp4

📑 Todo List

  • Inference code of MAGREF-480P
  • Checkpoint of MAGREF-480P
  • Checkpoint of MAGREF-14B Pro
  • Training code of MAGREF

✨ Community Works

ComfyUI

Thanks to Kijai for developing the ComfyUI nodes for MAGREF: https://github.com/kijai/ComfyUI-WanVideoWrapper

FP8-quantized Hugging Face model: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1-Wan-I2V-MAGREF-14B_fp8_e4m3fn.safetensors

Guideline

Guideline by Benji: https://www.youtube.com/watch?v=rwnh2Nnqje4

⚙️ Requirements and Installation

We recommend the following environment.

Environment

# 0. Clone the repo
git clone https://github.com/MAGREF-Video/MAGREF.git
cd MAGREF

# 1. Create conda environment
conda create -n magref python=3.11.2
conda activate magref

# 2. Install PyTorch and other dependencies
# CUDA 12.1
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu121
# CUDA 12.4
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124

# 3. Install pip dependencies
pip install -r requirements.txt

# 4. (Optional) Install xfuser for multiple GPUs inference
pip install "xfuser>=0.4.1"
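The correct `--index-url` in step 2 depends on your local CUDA toolkit version. The mapping from version to wheel index can be sketched with a small helper (the function name is ours, not part of the repo):

```python
def torch_wheel_index(cuda_version: str) -> str:
    """Map a CUDA version string such as "12.1" to the matching
    PyTorch wheel index URL (e.g. .../whl/cu121), as used in the
    install commands above."""
    tag = "cu" + cuda_version.replace(".", "")
    return f"https://download.pytorch.org/whl/{tag}"

print(torch_wheel_index("12.1"))  # -> https://download.pytorch.org/whl/cu121
print(torch_wheel_index("12.4"))  # -> https://download.pytorch.org/whl/cu124
```

Check your toolkit version with `nvcc --version` (or `nvidia-smi`) before choosing between the two `pip install` lines.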

Download MAGREF Checkpoint

# If you are in mainland China, run this first: export HF_ENDPOINT=https://hf-mirror.com
# pip install -U "huggingface_hub[cli]"
huggingface-cli download MAGREF-Video/MAGREF --local-dir ./ckpts/magref
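After the download finishes, it is worth confirming that the checkpoint directory is not empty or truncated. A rough sanity check (the weight-file extensions are assumptions, not the repo's documented layout):

```python
from pathlib import Path

def checkpoint_looks_complete(ckpt_dir: str) -> bool:
    """Rough check after `huggingface-cli download`: the target
    directory exists and contains at least one weight file.
    Extensions checked here (*.safetensors, *.pth) are assumptions."""
    root = Path(ckpt_dir)
    if not root.is_dir():
        return False
    return any(root.rglob("*.safetensors")) or any(root.rglob("*.pth"))

if __name__ == "__main__":
    print(checkpoint_looks_complete("./ckpts/magref"))
```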

🤗 Quick Start

  • Single-GPU inference
    Tested on a single NVIDIA H100 GPU. Inference consumes around 70 GB of VRAM, so an 80 GB GPU is recommended.
# way 1
bash infer_single_gpu.sh

# way 2
python generate.py \
    --ckpt_dir ./ckpts/magref \
    --save_dir ./samples \
    --prompt_path ./assets/single_id.txt
  • Multi-GPU inference
# way 1
bash infer_multi_gpu.sh

# way 2
torchrun --nproc_per_node=8 generate.py \
    --dit_fsdp --t5_fsdp --ulysses_size 8 \
    --ckpt_dir ./ckpts/magref \
    --save_dir ./samples \
    --prompt_path ./assets/multi_id.txt
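In the torchrun command above, `--ulysses_size 8` matches `--nproc_per_node=8`. A minimal pre-flight check, under the assumption (taken from that example, not from repo documentation) that the Ulysses sequence-parallel degree must equal the number of launched processes:

```python
def valid_parallel_config(nproc_per_node: int, ulysses_size: int) -> bool:
    """Assumption based on the 8-GPU example command: --ulysses_size
    should equal torchrun's --nproc_per_node."""
    return ulysses_size == nproc_per_node

print(valid_parallel_config(8, 8))  # True, matches the command above
print(valid_parallel_config(8, 4))  # False
```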

💡Note:

  • To achieve the best results, describe the visual content of the reference image as accurately as possible when writing the text prompt.
  • If the generated video is unsatisfactory, the most straightforward fixes are changing --base_seed and revising the prompt description.
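A quick way to act on the seed advice above is a small sweep over --base_seed values. The sketch below only builds and prints the commands (a dry run); the seeds and per-seed output directories are arbitrary choices, not repo conventions. Run any entry with, e.g., `eval "${cmds[0]}"`.

```shell
# Dry-run seed sweep: collect one generate.py command per candidate seed.
seeds=(0 42 12345)
cmds=()
for seed in "${seeds[@]}"; do
    cmds+=("python generate.py --ckpt_dir ./ckpts/magref --save_dir ./samples/seed_${seed} --prompt_path ./assets/single_id.txt --base_seed ${seed}")
done
printf '%s\n' "${cmds[@]}"   # inspect before running
```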

👍 Acknowledgement

📧 Ethics Concerns

The images used in these demos are sourced from public domains or generated by models, and are intended solely to showcase the capabilities of this research. If you have any concerns, please contact us at [email protected], and we will promptly remove them.

✏️ Citation

If you find our paper and code useful in your research, please consider giving a star ⭐ and citation 📝.

BibTeX

@misc{deng2025magrefmaskedguidanceanyreference,
      title={MAGREF: Masked Guidance for Any-Reference Video Generation with Subject Disentanglement}, 
      author={Yufan Deng and Yuanyang Yin and Xun Guo and Yizhi Wang and Jacob Zhiyuan Fang and Shenghai Yuan and Yiding Yang and Angtian Wang and Bo Liu and Haibin Huang and Chongyang Ma},
      year={2025},
      eprint={2505.23742},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.23742}, 
}
