Zhizhou Zhong · Yicheng Ji · Zhe Kong · Yiying Liu* · Jiarui Wang · Jiasun Feng · Lupeng Liu · Xiangyi Wang · Yanjia Li · Yuqing She · Ying Qin · Huan Li · Shuiyang Mao · Wei Liu · Wenhan Luo†
*Project Leader  †Corresponding Author
TL;DR: AnyTalker is an audio-driven framework for generating multi-person talking videos. It features a flexible multi-stream structure that scales the number of identities while ensuring seamless inter-identity interactions.
Video Demos (Generated with the 1.3B model; 14B results here)
| Input Image | Generated Video |
|---|---|
| ![]() | weather_en.mp4 |
| ![]() | 2p-0-en.mp4 |
| ![]() | default.mp4 |
🤗 Dec 5, 2025: We release the Gradio demo.
Dec 1, 2025: We release the technical report.
🔥 Nov 28, 2025: We release the AnyTalker weights, inference code, and project page.
- Inference code
- 1.3B stage 1 checkpoint (trained exclusively on single-person data)
- Benchmark for evaluating Interactivity
- Technical report
- Gradio demo
- 14B model (coming soon to Video Rebirth's creation platform)
```bash
conda create -n AnyTalker python=3.10
conda activate AnyTalker

pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126
pip install -r requirements.txt
pip install ninja
pip install flash_attn==2.8.1 --no-build-isolation
```
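Optionally, you can run a quick sanity check (a minimal sketch; it only confirms that the packages installed above import and that PyTorch sees a GPU):

```bash
# Optional: verify PyTorch, CUDA availability, and flash-attn after installation.
python -c "import torch, flash_attn; print(torch.__version__, torch.cuda.is_available(), flash_attn.__version__)"
```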
You need an FFmpeg build with x264 (libx264) support to encode H.264 videos. Depending on your environment, you can install it via one of the following commands:
```bash
# Ubuntu / Debian
apt-get install ffmpeg
```

or

```bash
# CentOS / RHEL
yum install ffmpeg ffmpeg-devel
```

or

```bash
# Conda (no root required)
conda install -c conda-forge ffmpeg
```
⚠️ Note: If you install FFmpeg via conda and encounter the error `Unknown encoder 'libx264'`, or if the following command does not list libx264:

```bash
ffmpeg -encoders | grep libx264
```

you can install a specific conda-forge build that includes libx264 support:

```bash
conda install -c conda-forge ffmpeg=7.1.0
```

Reference: bytedance/LatentSync#60
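If you want to confirm that H.264 encoding actually works (optional; a minimal sketch with an arbitrary output path), encode a one-second synthetic clip with libx264:

```bash
# Encode one second of black video with libx264; an error here means the FFmpeg build lacks x264 support.
ffmpeg -f lavfi -i color=c=black:s=128x128:d=1 -c:v libx264 -y /tmp/libx264_check.mp4
```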
| Models | Download Link | Notes |
|---|---|---|
| Wan2.1-Fun-V1.1-1.3B-InP | 🤗 Huggingface | Base model |
| wav2vec2-base | 🤗 Huggingface | Audio encoder |
| AnyTalker-1.3B | 🤗 Huggingface | Our weights |
Download the models using the Hugging Face CLI (`hf`):
```bash
# Install the Hugging Face CLI first if it is not already available:
# curl -LsSf https://hf.co/cli/install.sh | bash
hf download alibaba-pai/Wan2.1-Fun-V1.1-1.3B-InP --local-dir ./checkpoints/Wan2.1-Fun-1.3B-Inp
hf download facebook/wav2vec2-base-960h --local-dir ./checkpoints/wav2vec2-base-960h
hf download zzz66/AnyTalker-1.3B --local-dir ./checkpoints/AnyTalker
```

The directory should be organized as follows:
```
checkpoints/
├── Wan2.1-Fun-1.3B-Inp
├── wav2vec2-base-960h
└── AnyTalker
```
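Optionally, a small shell check (a sketch that simply reuses the local paths from the download commands above) to confirm all three checkpoint directories are in place before running inference:

```bash
# Optional: verify the three checkpoint directories exist under ./checkpoints.
for d in Wan2.1-Fun-1.3B-Inp wav2vec2-base-960h AnyTalker; do
  [ -d "./checkpoints/$d" ] && echo "OK       ./checkpoints/$d" || echo "MISSING  ./checkpoints/$d"
done
```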
The provided script currently performs 480p inference on a single GPU and automatically switches between single-person and multi-person generation according to the number of audio tracks in the input list.
```bash
#!/bin/bash
export CUDA_VISIBLE_DEVICES=0
python generate_a2v_batch_multiID.py \
    --ckpt_dir="./checkpoints/Wan2.1-Fun-1.3B-Inp" \
    --task="a2v-1.3B" \
    --size="832*480" \
    --batch_gen_json="./input_example/customize_your_input_here.json" \
    --batch_output="./outputs" \
    --post_trained_checkpoint_path="./checkpoints/AnyTalker/1_3B-single-v1.pth" \
    --sample_fps=24 \
    --sample_guide_scale=4.5 \
    --offload_model=True \
    --base_seed=44 \
    --dit_config="./checkpoints/AnyTalker/config_af2v_1_3B.json" \
    --det_thresh=0.15 \
    --mode="pad" \
    --use_half=True
```

or
```bash
sh infer_a2v_1_3B_batch.sh
```

- `--offload_model`: whether to offload the model to the CPU after each forward pass, reducing GPU memory usage.
- `--det_thresh`: detection threshold for the InsightFace model; a lower value improves face detection on stylized or abstract reference images.
- `--sample_guide_scale`: the recommended value is 4.5; it is applied to both the text and audio guidance.
- `--mode`: select "pad" if every audio track has already been zero-padded to a common length; select "concat" if you instead want the script to chain each speaker's clips together and then zero-pad the non-speaking segments to a uniform length (see the padding sketch below).
- `--use_half`: whether to enable half-precision (FP16) inference for faster generation.

Illustration of "pad" mode for audio inputs.
Audio Binding Order:
- Audio inputs are bound to persons based on their positions in the input image, from left to right.
- Person 1 corresponds to the leftmost person, Person 2 to the middle person (if any), and Person 3 to the rightmost person (if any).
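If your per-speaker recordings are not yet padded, the sketch below shows one way to prepare pad-mode inputs with plain FFmpeg. It is an illustration only, not part of this repo: the file names, the 6 s / 4 s durations, the back-to-back speaking order, and the use of `adelay`/`apad` (which require a reasonably recent FFmpeg) are all assumptions for the example.

```bash
# Example timeline: the left speaker talks for the first 6 s, the right speaker for the next 4 s,
# so both tracks are zero-padded with silence to the full 10 s length.

# Left speaker: original audio first, then silence up to 10 s.
ffmpeg -i speaker_left.wav -af "apad=whole_dur=10" speaker_left_padded.wav

# Right speaker: 6 s of leading silence (adelay takes milliseconds), then the audio, then pad to 10 s.
ffmpeg -i speaker_right.wav -af "adelay=6000:all=1,apad=whole_dur=10" speaker_right_padded.wav
```

Pass the padded tracks in left-to-right order of the people in the reference image, as described above.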
You can customize your inputs and parameters via the web interface by running the following command:
```bash
python app.py
```

We provide the benchmark used in our paper to evaluate Interactivity, including the dataset and the metric-computation script.
```bash
python -m pip install -U yt-dlp
cd ./benchmark
python download.py
```

The directory should be organized as follows:
```
benchmark/
├── audio_left             # Audio for the left speaker (zero-padded to full length)
├── audio_right            # Audio for the right speaker (zero-padded to full length)
├── speaker_duration.json  # Start/end timestamps for each speaker
├── interact_11.mp4        # Example video
└── frames                 # Reference images supplied as the first video frame
```
```bash
# single video
python calculate_interactivity.py --video interact_11.mp4

# entire directory
python calculate_interactivity.py --dir ./your_dir
```

The script prints the Interactivity score defined in the paper.
Note: generated videos must use exactly the same file names as listed in `speaker_duration.json`.
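To see exactly which names and timestamps the scorer expects, you can pretty-print the JSON before renaming your outputs (a minimal sketch; the path assumes the benchmark was downloaded into ./benchmark as above):

```bash
# Inspect the expected video names and speaker timestamps.
python -m json.tool benchmark/speaker_duration.json | head -n 20
```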
If you find our work useful in your research, please consider citing:
```bibtex
@misc{zhong2025anytalkerscalingmultipersontalking,
      title={AnyTalker: Scaling Multi-Person Talking Video Generation with Interactivity Refinement},
      author={Zhizhou Zhong and Yicheng Ji and Zhe Kong and Yiying Liu and Jiarui Wang and Jiasun Feng and Lupeng Liu and Xiangyi Wang and Yanjia Li and Yuqing She and Ying Qin and Huan Li and Shuiyang Mao and Wei Liu and Wenhan Luo},
      year={2025},
      eprint={2511.23475},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.23475},
}
```
The models in this repository are licensed under the Apache 2.0 License. We claim no rights over your generated content: you are free to use it, provided your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing content that violates applicable laws, causes harm to individuals or groups, disseminates personal information with intent to harm, spreads misinformation, or targets vulnerable populations.