
Shape My Moves: Text-Driven Shape-Aware Synthesis of Human Motions

🚩 Accepted at CVPR 2025

Project Page

Ting-Hsuan Liao¹˒², Yi Zhou², Yu Shen², Chun-Hao Paul Huang², Saayan Mitra², Jia-Bin Huang¹, Uttaran Bhattacharya²

1University of Maryland, College Park, USA, 2Adobe Research


This is the implementation of ShapeMove, a framework for generating body-shape-aware human motion from text. ShapeMove combines a quantized VAE with continuous shape conditioning and a pretrained language model to synthesize realistic, shape-aligned motions from natural language descriptions.
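To make the "quantized VAE" part of the description concrete, here is a minimal numpy sketch of the vector-quantization step such a model relies on: each continuous latent is snapped to its nearest codebook entry. The codebook size, latent dimension, and function name are illustrative, not the actual ShapeMove implementation.

```python
import numpy as np

def quantize(latents, codebook):
    """Map each continuous latent to its nearest codebook entry.

    latents:  (T, D) continuous encoder outputs
    codebook: (K, D) learned code vectors
    Returns (T,) code indices and (T, D) quantized latents.
    """
    # Squared Euclidean distance between every latent and every code: (T, K).
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))  # K=512 codes, D=64 dims (illustrative)
latents = rng.normal(size=(8, 64))     # 8 frames of motion latents
idx, quantized = quantize(latents, codebook)
```

During training the quantized latents replace the continuous ones (with a straight-through gradient), which is what lets a language model later predict motion as discrete tokens.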

⚙️ Environment Setup

conda create -n shapemove python=3.10
conda activate shapemove
pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
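Since the pip command pins exact versions, a small stdlib-only check can catch a mismatched environment early. This is a sketch: in a real environment you would fill `installed` from `importlib.metadata.version`.

```python
# Pins mirror the pip command above.
PINNED = {"torch": "2.4.1", "torchvision": "0.19.1", "torchaudio": "2.4.1"}

def check_pins(pinned, installed):
    """Return packages whose installed version differs from the pin."""
    return {name: (want, installed.get(name))
            for name, want in pinned.items()
            if installed.get(name) != want}

# In a real environment:
#   from importlib import metadata
#   installed = {name: metadata.version(name) for name in PINNED}
installed = {"torch": "2.4.1", "torchvision": "0.19.1", "torchaudio": "2.4.1"}
print(check_pins(PINNED, installed))  # {} means everything matches
```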

⌛️ Download Data and Base Models

bash scripts/download_models.sh

This step downloads our pretrained ShapeMove model trained on the AMASS dataset, the flan-t5-base language model, and the neutral SMPL model used for visualization.

📐 Run Inference

bash scripts/demo.sh

The output motion and the SMPL shape parameters (betas) are saved under outputs.
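The saved arrays can be inspected with numpy. The file names and array dimensions below are hypothetical (check the demo script for the actual ones), so this sketch round-trips synthetic motion and beta arrays just to show the pattern.

```python
import tempfile
from pathlib import Path
import numpy as np

out = Path(tempfile.mkdtemp())

# Synthetic stand-ins: T frames of D-dim motion features, plus SMPL betas
# (commonly a 10-dim vector). Both shapes are illustrative.
motion = np.zeros((120, 263), dtype=np.float32)
betas = np.zeros(10, dtype=np.float32)

np.save(out / "motion.npy", motion)  # hypothetical file names under outputs/
np.save(out / "betas.npy", betas)

print(np.load(out / "motion.npy").shape)  # (120, 263)
```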

🎞️ Render Motion

Blender setup

Follow the setup steps in TEMOS.

After installing Blender and the required packages in Blender's Python environment, run the following command to verify the installation:

blender --background --version

This should return Blender 2.93.18.
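If you script the setup, the reported version can also be checked programmatically. This sketch parses the first line of the version output; the subprocess call is commented out so the snippet runs without Blender installed.

```python
import re

def blender_version(text):
    """Extract (major, minor, patch) from Blender's --version output."""
    m = re.search(r"Blender (\d+)\.(\d+)\.(\d+)", text)
    if not m:
        raise ValueError("could not parse Blender version")
    return tuple(int(g) for g in m.groups())

# In a real setup:
#   import subprocess
#   out = subprocess.run(["blender", "--background", "--version"],
#                        capture_output=True, text=True).stdout
out = "Blender 2.93.18"  # sample first line of the version output
assert blender_version(out) == (2, 93, 18)
```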

Render Meshes with Blender

# generate meshes from the saved beta and motion .npy files
python -m utils.mesh --dir [path/to/inference/output/folder]

# render images with Blender (from the obj/ply files)
blender --background -noaudio --python utils/blender_render.py -- --mode=video --dir [path/to/mesh/folder]

# gather the generated images and make a video
python utils/visualization.py --dir [path/to/mesh/folder]
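One pitfall when gathering rendered frames into a video is lexicographic ordering, which puts frame_10.png before frame_2.png. A numeric sort avoids that; utils/visualization.py may already handle this, so treat the snippet as a generic sketch with hypothetical file names.

```python
import re

def numeric_sort(filenames):
    """Sort frame files by the first integer embedded in each name."""
    def key(name):
        m = re.search(r"(\d+)", name)
        return int(m.group(1)) if m else -1
    return sorted(filenames, key=key)

frames = ["frame_10.png", "frame_2.png", "frame_1.png"]
print(numeric_sort(frames))  # ['frame_1.png', 'frame_2.png', 'frame_10.png']
```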

Citations

Shape My Moves (this work)

@inproceedings{shapemove,
    author    = {Liao, Ting-Hsuan and Zhou, Yi and Shen, Yu and Huang, Chun-Hao Paul and Mitra, Saayan and Huang, Jia-Bin and Bhattacharya, Uttaran},
    title     = {Shape My Moves: Text-Driven Shape-Aware Synthesis of Human Motions},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2025}
}

SMPL

Please follow the official instructions to cite SMPL.

AMASS

Please follow the official instructions to cite AMASS.

Licenses

Shape My Moves (this work)

MIT License

SMPL

Please follow the official licensing terms for SMPL.

AMASS

Please follow the official licensing terms for AMASS.

Acknowledgments

Some great resources we benefited from: MotionGPT, T2M-GPT, and text-to-motion.
