Tested on Ubuntu 20.04, Python 3.8, NVIDIA A6000, CUDA 11.7, and PyTorch 2.0.0. Follow the steps below to set up the environment.
- Clone the repo:
```bash
git clone https://github.com/XiaokunSun/Barbie.git
cd Barbie
```
- Create a conda environment:
```bash
conda create -n barbie python=3.8 -y
conda activate barbie
```
- Install dependencies:
```bash
pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1
pip install -r requirements.txt
pip install git+https://github.com/openai/CLIP.git
pip install git+https://github.com/ashawkey/envlight.git
pip install git+https://github.com/NVlabs/nvdiffrast.git --no-build-isolation
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
pip install git+https://github.com/KAIR-BAIR/[email protected]
pip install git+https://github.com/ashawkey/cubvh --no-build-isolation
conda install https://anaconda.org/pytorch3d/pytorch3d/0.7.5/download/linux-64/pytorch3d-0.7.5-py38_cu117_pyt200.tar.bz2  # Note: please ensure the pytorch3d version matches your CUDA and PyTorch versions
pip install git+https://github.com/bellockk/alphashape.git
```
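The pytorch3d archive name encodes the Python, CUDA, and PyTorch versions it was built against (`py38_cu117_pyt200` = Python 3.8, CUDA 11.7, PyTorch 2.0.0), so pick the build that matches your setup. As an optional sanity check, imports along the lines of the sketch below should succeed once everything is installed (the import names are assumed from the packages above):
```bash
# Optional sanity check: torch should see the GPU and the compiled extensions should import.
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import nvdiffrast.torch, tinycudann, nerfacc, pytorch3d, cubvh, envlight, clip, alphashape; print('all imports OK')"
```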
- Download models:
```bash
mkdir ./pretrained_models
bash ./scripts/download_humannorm_models.sh
python ./scripts/download_richdreamer_models.py
cd ./pretrained_models && ln -s ~/.cache/huggingface ./
cd ../
```
- Download other models (e.g., SMPLX, Tets) from Google Drive. Make sure you have the following models:
```
Barbie
|-- load
|   |-- barbie
|   |   |-- data_dict.json
|   |   |-- overall_data_dict.json
|   |-- smplx_models
|   |   |-- smplx
|   |   |   |-- smplx_cloth_mask.pkl
|   |   |   |-- smplx_face_ears_noeyeballs_idx.npy
|   |   |   |-- SMPLX_NEUTRAL.npz
|   |   |   |-- smplx_watertight.pkl
|   |-- tets
|   |   |-- 256_tets.npz
|   |-- prompt_library.json
|-- pretrained_models
|   |-- controlnet-normal-sd1.5
|   |-- depth-adapted-sd1.5
|   |-- normal-adapted-sd1.5
|   |-- normal-aligned-sd1.5
|   |-- Damo_XR_Lab
|   |   |-- Normal-Depth-Diffusion-Model
|   |   |   |-- nd_mv_ema.ckpt
|   |   |   |-- albedo_mv_ema.ckpt
|   |-- huggingface
|   |   |-- hub
|   |   |   |-- models--runwayml--stable-diffusion-v1-5
|   |   |   |-- models--openai--clip-vit-large-patch14
|   |   |   |-- models--stabilityai--stable-diffusion-2-1-base
|   |   |   |-- models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K
```
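If you want to confirm the layout is in place before running anything, a quick check along these lines works (the paths follow the nesting shown above; adjust them if your layout differs):
```bash
# Optional: verify a few key assets from the tree above are present.
for f in ./load/barbie/data_dict.json ./load/barbie/overall_data_dict.json \
         ./load/smplx_models/smplx/SMPLX_NEUTRAL.npz ./load/tets/256_tets.npz \
         ./pretrained_models/Damo_XR_Lab/Normal-Depth-Diffusion-Model/nd_mv_ema.ckpt; do
    [ -e "$f" ] && echo "ok: $f" || echo "MISSING: $f"
done
```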
```bash
# Generate naked human
python ./scripts/generate_naked_human.py --dict_path ./load/barbie/data_dict.json --naked_human_exp_root_dir ./outputs/naked_human --naked_human_idx 0:1:1 --gpu_idx 0
# Generate clothed human
python ./scripts/generate_clothed_human.py --dict_path ./load/barbie/data_dict.json --naked_human_exp_root_dir ./outputs/naked_human --clothed_human_exp_root_dir ./outputs/clothed_human --naked_human_idx 0:1:1 --cloth_idx 0:1:1 --gpu_idx 0
# Generate human wearing overall
python ./scripts/generate_naked_human.py --dict_path ./load/barbie/overall_data_dict.json --naked_human_exp_root_dir ./outputs/naked_human --naked_human_idx 0:1:1 --gpu_idx 0
python ./scripts/generate_clothed_overall_human.py --dict_path ./load/barbie/overall_data_dict.json --naked_human_exp_root_dir ./outputs/naked_human --clothed_human_exp_root_dir ./outputs/clothed_overall_human --naked_human_idx 0:1:1 --cloth_idx 0:1:1 --gpu_idx 0
```
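The `--naked_human_idx` and `--cloth_idx` arguments appear to take `start:end:step` ranges over entries in the data dictionary (an assumption here; check the scripts for the exact semantics). Under that assumption, a simple loop can generate several avatars one at a time:
```bash
# Hypothetical batch run: generate the first four naked humans one by one,
# assuming the idx argument is a start:end:step range over data_dict.json entries.
for i in 0 1 2 3; do
    python ./scripts/generate_naked_human.py \
        --dict_path ./load/barbie/data_dict.json \
        --naked_human_exp_root_dir ./outputs/naked_human \
        --naked_human_idx "$i:$((i+1)):1" \
        --gpu_idx 0
done
```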
```bash
# Apparel Combination
python ./scripts/apparel_combination.py
# Apparel Editing
python ./scripts/apparel_editing.py
# Animation
python ./scripts/animation.py
```
If you want to customize clothing templates, please see ./scripts/smplx_cloth_mask.py.
This repository is based on many amazing research works and open-source projects: ThreeStudio, HumanNorm, RichDreamer, G-Shell, etc. Thanks to all the authors for their selfless contributions to the community!
If you find this repository helpful for your work, please consider citing it as follows:
```bibtex
@article{sun2024barbie,
  title={Barbie: Text to Barbie-Style 3D Avatars},
  author={Sun, Xiaokun and Zhang, Zhenyu and Tai, Ying and Tang, Hao and Yi, Zili and Yang, Jian},
  journal={arXiv preprint arXiv:2408.09126},
  year={2024}
}
```