[Project Page] [Paper]
Be aware that the current version still needs some corrections and clean-up. If you have any suggestions for the environment setup, please let us know so they can help future users.
conda create -n anymole python==3.10.0
conda activate anymole
pip install -r requirements.txt
pip install -e anymole-icadapt/EILEV
pip install "git+https://github.com/facebookresearch/pytorch3d.git@297020a4b1d7492190cb4a909cafbd2c81a12cb5"
We tested on a single Nvidia A6000 GPU (48GB VRAM).
AnyMoLe requires fine-tuning a video diffusion model and training a scene-specific joint estimator, so running on a GPU with less memory requires modifications such as a smaller batch size or gradient accumulation (see the sketch below).
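One common way to fit the fine-tuning into less VRAM without changing the effective batch size is gradient accumulation. The sketch below is generic PyTorch, not AnyMoLe's actual training loop; the model, optimizer, and batch sizes are toy stand-ins to show the pattern.

```python
# Generic gradient-accumulation sketch (illustrative; not AnyMoLe's trainer).
# A smaller per-step (micro) batch lowers peak VRAM; accumulating gradients over
# several micro-batches keeps the effective batch size unchanged.
import torch
import torch.nn as nn

model = nn.Linear(16, 1)                         # toy stand-in for the fine-tuned model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
data, target = torch.randn(32, 16), torch.randn(32, 1)

micro_batch, accum_steps = 4, 4                  # effective batch = 16
optimizer.zero_grad()
for i in range(0, len(data), micro_batch):
    x, y = data[i:i + micro_batch], target[i:i + micro_batch]
    loss = nn.functional.mse_loss(model(x), y) / accum_steps  # scale for accumulation
    loss.backward()
    if (i // micro_batch + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```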
- Pretrained checkpoint from DynamiCrafter (required)
- Pretrained checkpoint from EILEV (recommended; it can be replaced with your own text input)
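If the checkpoints are hosted on the Hugging Face Hub, they can be fetched programmatically. The sketch below uses placeholder `repo_id` and `filename` values (take the real ones from the DynamiCrafter and EILEV release pages), and the `checkpoints/` target directory is an assumption.

```python
# Illustrative checkpoint download via the Hugging Face Hub.
# NOTE: repo_id / filename are placeholders, not real release names.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="<dynamicrafter-repo-id>",      # placeholder
    filename="<model.ckpt>",                # placeholder
    local_dir="checkpoints/dynamicrafter",  # assumed target directory
)
print("saved to", ckpt_path)
```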
Before starting, render frames from the 2-second context motion and the target keyframes.
To obtain data from the context and key frames, render them with visualizer.ipynb.
❗ When rendering with visualizer.ipynb, ensure the full motion stays within the camera view and is large enough to be clearly visible.
If your motion data does not match our data setup, refer to data/generate_from_fbx.py and data/refine_for_anymole_if_needed.py.
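Once rendering is done, it is worth checking that the frames landed where the inference script expects them. The directory below follows the `--frames_dir` argument used later; the root prefix and the `.png` extension are assumptions, so adjust them to your layout.

```python
# Quick check that rendered frames exist where run_video_inference.py expects them.
from pathlib import Path

character, motion = "Amy", "Amy_careful_Walking_input"     # example from run_example.sh
frames_dir = Path("anymole-render/images") / character / motion  # adjust root if needed

frames = sorted(frames_dir.glob("*.png"))                  # extension is an assumption
print(f"{len(frames)} frames found in {frames_dir}")
assert frames, "No frames found -- render the context motion and keyframes first."
```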
bash run_example.sh {character} {motion}
ex) bash run_example.sh Amy Amy_careful_Walking_input
(Example result video: Amy_Walk_0_resized.mp4)
The pipeline consists of the following stages:
- Fine-tune the video model (ICAdapt) and run inference
- Train the scene-specific joint estimator
- Perform motion video mimicking
Here, motion video mimicking is a new term for adopting character motion from the output of the video generation model.
python run_video_inference.py --frames_dir ../anymole-render/images/{Character}/{Motion} --text_input --ICAdapt --interp --onlykey 750 --stage 2
python pose3d_train.py --char_name {Character} --motion_name {Motion}
python motion_video_mimicking.py --char_name {Character} --motion_path {Motion} --kpt_pjt
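For reference, the three stage commands above can also be chained from a small Python driver, which is roughly what run_example.sh automates. The working directory each script expects is an assumption, so adjust `cwd` to match the repo layout.

```python
# Illustrative Python driver chaining the three stages, mirroring run_example.sh.
import subprocess

character, motion = "Amy", "Amy_careful_Walking_input"

stages = [
    # 1) Fine-tune the video model (ICAdapt) and run inference
    ["python", "run_video_inference.py",
     "--frames_dir", f"../anymole-render/images/{character}/{motion}",
     "--text_input", "--ICAdapt", "--interp", "--onlykey", "750", "--stage", "2"],
    # 2) Train the scene-specific joint estimator
    ["python", "pose3d_train.py", "--char_name", character, "--motion_name", motion],
    # 3) Motion video mimicking
    ["python", "motion_video_mimicking.py", "--char_name", character,
     "--motion_path", motion, "--kpt_pjt"],
]

for cmd in stages:
    # add cwd=... if the scripts live in different subdirectories
    subprocess.run(cmd, check=True)  # stop if a stage fails
```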
@article{yun2025anymole,
  title={AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models},
  author={Yun, Kwan and Hong, Seokhyeon and Kim, Chaelin and Noh, Junyong},
  journal={arXiv preprint arXiv:2503.08417},
  year={2025}
}
