SimHMR is a simple and effective framework for 3D human mesh recovery from single images. It combines the power of transformer architectures with the SMPL body model to achieve state-of-the-art performance on standard benchmarks.
- Python 3.8
- CUDA 10.2+ (for GPU training)
- Conda (recommended for environment management)
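A quick way to confirm the prerequisites are in place (a sketch assuming a POSIX shell; the fallback messages are mine, not part of this repo):

```shell
# Sanity-check the prerequisites before installing.
python3 --version    # should report Python 3.8.x
command -v conda >/dev/null 2>&1 && conda --version || echo "conda not found"
command -v nvcc  >/dev/null 2>&1 && nvcc --version | tail -n 1 || echo "nvcc not found (CPU-only setup?)"
```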
Install with conda (recommended):

- Clone the repository:

  ```shell
  git clone https://github.com/Inso-13/simhmr.git
  cd simhmr
  ```

- Create and activate the conda environment:

  ```shell
  conda env create -f env.yml
  conda activate human
  ```

- Install SimHMR:

  ```shell
  pip install -e .
  ```

Alternatively, install with pip:

- Clone the repository:

  ```shell
  git clone https://github.com/Inso-13/simhmr.git
  cd simhmr
  ```

- Install PyTorch (the `cu111` wheels target CUDA 11.1):

  ```shell
  pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
  ```

- Install the remaining dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Install SimHMR:

  ```shell
  pip install -e .
  ```

Tested dependency versions:

- PyTorch: 1.8.0
- MMCV: 1.5.3
- MMDetection: 2.27.0
- MMPose: 0.28.1
- SMPL-X: 0.1.28
- PyTorch3D: 0.7.2
- OpenCV: 4.7.0.68
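Once everything is installed, a small script can confirm that the pinned versions above are actually present in the environment. This is a sketch of mine, not part of the repo; the PyPI distribution names below are my assumption (e.g. MMCV may be installed as `mmcv` rather than `mmcv-full`):

```python
from importlib import metadata

# Versions pinned by this repo (from the list above). The distribution
# names are assumptions; adjust them to match your environment.
pins = {
    "torch": "1.8.0",
    "mmcv-full": "1.5.3",
    "mmdet": "2.27.0",
    "mmpose": "0.28.1",
    "smplx": "0.1.28",
    "pytorch3d": "0.7.2",
    "opencv-python": "4.7.0.68",
}

for name, expected in pins.items():
    try:
        installed = metadata.version(name)
        status = "OK" if installed.startswith(expected) else f"MISMATCH (found {installed})"
    except metadata.PackageNotFoundError:
        status = "NOT INSTALLED"
    print(f"{name:>15}: expected {expected} -> {status}")
```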
- Prepare your dataset and update the configuration file. Organise all datasets under the data/ directory following the MMHuman3D format; see the MMHuman3D documentation for details.
- Start training:

  ```shell
  python tools/train.py configs/simhmr/pw3d.py
  ```

- Evaluate a trained checkpoint:

  ```shell
  python tools/test.py configs/simhmr/pw3d.py work_dirs/checkpoint.pth --eval mpjpe pa-mpjpe
  ```
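For reference, `mpjpe` and `pa-mpjpe` are the two standard 3D pose errors: mean per-joint position error, and the same error after Procrustes (similarity) alignment of the prediction to the ground truth. A minimal NumPy sketch of the two metrics (illustrative only, not SimHMR's actual evaluation code; function names are mine):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error for (J, 3) joint arrays."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after aligning pred to gt with the best similarity transform
    (orthogonal Procrustes: optimal rotation, scale, and translation)."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, s, Vt = np.linalg.svd(p.T @ g)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        s[-1] *= -1
        R = U @ Vt
    scale = s.sum() / (p ** 2).sum()  # optimal isotropic scale
    aligned = scale * p @ R + mu_g
    return mpjpe(aligned, gt)
```

In MMHuman3D-style evaluation both metrics are reported in millimetres, averaged over all frames of the test set.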
If you find this work useful, please cite:
```bibtex
@inproceedings{huang2023simhmr,
  title={Simhmr: A simple query-based framework for parameterized human mesh reconstruction},
  author={Huang, Zihao and Shi, Min and Liu, Chengxin and Xian, Ke and Cao, Zhiguo},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  pages={6918--6927},
  year={2023}
}
```

This project is licensed under the Apache License 2.0; see the LICENSE file for details.
- This work builds upon MMHuman3D
- Thanks to the SMPL model authors
- Thanks to all the dataset providers
We welcome contributions! Please see our contributing guidelines for details.