Zhihang Zhong<sup>1,†</sup>, Xiao Sun<sup>1,†</sup>, Yinqiang Zheng<sup>2</sup>
Stay tuned. Feel free to contact me about bugs or missing files.
```bash
# Create and activate the conda environment
conda create -n baga python==3.8 -y
conda activate baga

# Install PyTorch 2.0.0 with CUDA 11.8
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia

# Build the rasterization and KNN submodules, then install the remaining dependencies
pip install submodules/diff-gaussian-rasterization/
pip install submodules/simple-knn/
pip install --upgrade https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl
pip install -r requirement.txt
```
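After installation, a quick sanity check can confirm that PyTorch sees CUDA and that the compiled extensions import. This is a minimal sketch, assuming a CUDA-capable GPU is visible; the import names follow the standard gaussian-splatting submodules and the KNN_CUDA wheel.

```python
# Quick environment sanity check for the baga conda env.
import torch

print(torch.__version__)          # expect 2.0.0
print(torch.version.cuda)         # expect 11.8
print(torch.cuda.is_available())  # expect True on a CUDA-capable machine

# The compiled extensions should import cleanly if the builds succeeded.
from diff_gaussian_rasterization import GaussianRasterizer  # noqa: F401
from simple_knn._C import distCUDA2                         # noqa: F401
from knn_cuda import KNN                                    # noqa: F401
```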
Register and download the SMPL models here, and put the downloaded models into the `assets` folder. Only the neutral model is needed. The folder structure should look like:
```
./
└── assets/
    └── SMPL_NEUTRAL.pkl
```
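To confirm the model file is readable, you can inspect it as below. This is a minimal sketch assuming the standard SMPL pickle layout; the original SMPL pickles reference chumpy, so `pip install chumpy` first if unpickling fails.

```python
# Inspect the downloaded SMPL neutral model (standard SMPL pickle layout assumed).
import pickle

with open("assets/SMPL_NEUTRAL.pkl", "rb") as f:
    smpl = pickle.load(f, encoding="latin1")  # the file is a Python 2 pickle

print(sorted(smpl.keys()))
# Typical keys: 'f', 'J_regressor', 'kintree_table', 'posedirs',
# 'shapedirs', 'v_template', 'weights', ...
print(smpl["v_template"].shape)  # expect (6890, 3) rest-pose vertices
```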
We contribute synthetic and real datasets for evaluating blur-aware 3DGS human avatar synthesis techniques.
For the synthetic dataset, the ZJU-MoCap license agreement does not allow us to redistribute the sharp ZJU-MoCap data, so you have to download the original dataset yourself and construct the final synthetic dataset with our scripts, as follows:
- Download the blurry frames and the calibrations from here and unzip it to `./data/BlurZJU`.
- Follow the procedure here to download ZJU-MoCap (refined version). Unzip and put the six scenes (`my_377`, `my_386`, `my_387`, `my_392`, `my_393`, `my_394`) into `./data/ZJU-MoCap-Refine`. (If you get scenes starting with `CoreView` instead of `my`, then you have downloaded the original ZJU-MoCap, not the refined version.)
- Run `python rearrange_zju.py` to re-arrange the dataset.
Download the real dataset from this link and unzip it to the `./data` directory.
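Before training, it can help to verify the `./data` layout. This is a minimal sketch; the scene names and synthetic paths come from the steps above, while the real-dataset folder name `BSHuman` is an assumption inferred from the `train_BSHuman.sh` script name.

```python
# Verify the expected ./data layout before training.
from pathlib import Path

data = Path("data")
scenes = ["my_377", "my_386", "my_387", "my_392", "my_393", "my_394"]

assert (data / "BlurZJU").is_dir(), "missing ./data/BlurZJU (blurry frames + calibrations)"
for s in scenes:
    assert (data / "ZJU-MoCap-Refine" / s).is_dir(), f"missing refined scene {s}"

# Hypothetical path: we assume the real dataset unpacks to ./data/BSHuman.
if not (data / "BSHuman").is_dir():
    print("real dataset not found at ./data/BSHuman -- check your unzip path")
else:
    print("dataset layout looks good")
```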
```bash
# Train on the synthetic BlurZJU dataset
chmod 777 train_BlurZJU.sh
bash train_BlurZJU.sh

# Train on the real BSHuman dataset
chmod 777 train_BSHuman.sh
bash train_BSHuman.sh
```
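If you prefer launching both runs from a single Python process, a thin wrapper like the following works. This is a convenience sketch, not part of the released code; it only invokes the two scripts named above.

```python
# Run both training scripts sequentially and stop on the first failure.
import subprocess

for script in ["train_BlurZJU.sh", "train_BSHuman.sh"]:
    print(f"==> running {script}")
    subprocess.run(["bash", script], check=True)  # raises CalledProcessError on failure
```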
We appreciate gaussian-splatting, GauHuman, and GSM for their wonderful work and code implementations. We would also like to express our deep gratitude for the release of NeuralBody (as well as the ZJU-MoCap dataset) and EasyMocap, which we used to calibrate our dataset.