🚩 Accepted at CVPR 2025
Ting-Hsuan Liao¹,², Yi Zhou², Yu Shen², Chun-Hao Paul Huang², Saayan Mitra², Jia-Bin Huang¹, Uttaran Bhattacharya²

¹University of Maryland, College Park, USA · ²Adobe Research
This is the implementation of ShapeMove, a framework for generating body-shape-aware human motion from text. ShapeMove combines a quantized VAE with continuous shape conditioning and a pretrained language model to synthesize realistic, shape-aligned motions from natural language descriptions.
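At inference time, the pipeline encodes the text prompt with the pretrained language model, conditions generation on a continuous body-shape vector (SMPL betas), and decodes the predicted motion tokens with the quantized VAE. The sketch below only illustrates this flow; `load_shapemove` and `generate` are hypothetical placeholders, not the actual API of this repository (see `scripts/demo.sh` below for the real entry point):

```python
# Conceptual sketch of the ShapeMove inference flow described above.
# `load_shapemove` and `generate` are hypothetical placeholders, not this repo's API.
import torch

text = "a tall, heavy person walks in a circle"
betas = torch.zeros(1, 10)  # SMPL shape coefficients; zeros = mean body shape

model = load_shapemove("checkpoints/shapemove.ckpt")  # placeholder checkpoint path

# The language model encodes the prompt; generation is conditioned on the
# continuous shape vector, and the quantized VAE decodes the predicted
# motion tokens back into a pose sequence.
motion = model.generate(text=text, betas=betas)
print(motion.shape)  # e.g., (num_frames, pose_feature_dim)
```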
Set up the environment and install the dependencies:

```bash
conda create -n shapemove python=3.10
conda activate shapemove
pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
```
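As an optional sanity check, you can confirm that the pinned CUDA build of PyTorch is active before proceeding; the snippet below uses only standard PyTorch calls:

```python
import torch

# The pip command above installs the CUDA 11.8 wheels, so the version
# string should carry a +cu118 suffix on a matching setup.
print(torch.__version__)          # expected: 2.4.1+cu118
print(torch.cuda.is_available())  # True if a compatible GPU and driver are visible
```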
Download the pretrained models:

```bash
bash scripts/download_models.sh
```

This step downloads our pretrained ShapeMove model trained on the AMASS dataset, the flan-t5-base language model, and the SMPL neutral body model used for visualization.
Run the demo:

```bash
bash scripts/demo.sh
```

The generated motion and the corresponding shape parameters (SMPL betas) are saved under `outputs/`.
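To inspect the results programmatically, the saved arrays can be loaded with NumPy. This is a minimal sketch; the exact file names written under `outputs/` depend on the demo script, so the paths below are illustrative placeholders:

```python
import numpy as np

# Illustrative file names; check outputs/ for what the demo actually writes.
motion = np.load("outputs/motion.npy")  # per-frame motion representation
betas = np.load("outputs/beta.npy")     # SMPL shape coefficients

print("motion:", motion.shape)  # e.g., (num_frames, feature_dim)
print("betas:", betas)
```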
To render the generated motions, follow the setup steps in TEMOS. After installing Blender and the required packages in Blender's bundled Python environment, run the following command to verify the installation:

```bash
blender --background --version
```

This should report Blender 2.93.18.
To visualize the results, run the following steps in order:

```bash
# 1. Generate meshes from the predicted beta and motion .npy files
python -m utils.mesh --dir [path/to/inference/output/folder]

# 2. Render images from the meshes (.obj/.ply files) with Blender
blender --background -noaudio --python utils/blender_render.py -- --mode=video --dir [path/to/mesh/folder]

# 3. Gather the rendered images into a video
python utils/visualization.py --dir [path/to/mesh/folder]
```
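When rendering many sequences, the three steps above can be chained from a small driver script. The sketch below simply invokes the documented commands via `subprocess`; it assumes a single folder holds both the inference outputs and the generated meshes, so adjust the `--dir` arguments to your layout:

```python
import subprocess

out_dir = "outputs"  # assumed shared folder for inference outputs and meshes

# 1. Generate meshes from the predicted beta and motion .npy files.
subprocess.run(["python", "-m", "utils.mesh", "--dir", out_dir], check=True)

# 2. Render images with Blender (the blender binary must be on PATH).
subprocess.run(
    ["blender", "--background", "-noaudio",
     "--python", "utils/blender_render.py", "--",
     "--mode=video", "--dir", out_dir],
    check=True,
)

# 3. Gather the rendered images into a video.
subprocess.run(["python", "utils/visualization.py", "--dir", out_dir], check=True)
```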
If you find our work useful, please cite:

```bibtex
@inproceedings{shapemove,
  author    = {Liao, Ting-Hsuan and Zhou, Yi and Shen, Yu and Huang, Chun-Hao Paul and Mitra, Saayan and Huang, Jia-Bin and Bhattacharya, Uttaran},
  title     = {Shape My Moves: Text-Driven Shape-Aware Synthesis of Human Motions},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2025}
}
```

Please follow the official instructions to cite SMPL and AMASS.

Please also observe the official licensing terms for SMPL and AMASS.
Some great resources we benefited from: MotionGPT, T2M-GPT, and text-to-motion.
