
Interactive Rendering of Relightable and Animatable Gaussian Avatars (TVCG 2025)

Installation

  1. Download and install the necessary packages.
conda create -n RAGA python=3.10
conda activate RAGA

pip install torch==2.4.1 torchvision "numpy<2.0" --index-url https://download.pytorch.org/whl/cu121
pip install iopath plyfile scipy opencv-python imageio scikit-image tqdm websockets tensorboardX tensorboard pyyaml pynvjpeg open3d trimesh rtree

pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py310_cu121_pyt241/download.html
pip install git+https://github.com/NVlabs/nvdiffrast.git@main#egg=nvdiffrast

cd submodules/diff-gaussian-rasterization/ && pip install .
  2. Install frnn, a fixed-radius nearest-neighbor search implementation that is much faster than pytorch3d's knn_points.

  3. Download the SMPL model and place the SMPL_FEMALE.pkl, SMPL_MALE.pkl, and SMPL_NEUTRAL.pkl files in ./smpl_model/smpl/. Then follow this link to remove the chumpy objects from the model data (a sketch of the idea follows below).
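
The cleanup linked above boils down to converting chumpy arrays in the SMPL .pkl files back to plain numpy arrays. Below is a minimal sketch of the idea, assuming chumpy is installed (it is unmaintained and needs an older numpy); the paths are examples only.

# Hedged sketch: strip chumpy objects from an SMPL .pkl file.
import pickle
import numpy as np
import chumpy

def clean_smpl(src_path, dst_path):
    with open(src_path, "rb") as f:
        data = pickle.load(f, encoding="latin1")
    for key, value in list(data.items()):
        # Replace every chumpy array with a plain numpy array.
        if isinstance(value, chumpy.Ch):
            data[key] = np.array(value)
    with open(dst_path, "wb") as f:
        pickle.dump(data, f)

clean_smpl("smpl_model/smpl/SMPL_NEUTRAL.pkl", "smpl_model/smpl/SMPL_NEUTRAL.pkl")

After installation, a quick import check can confirm the environment is usable. The module names below are the conventional ones for these packages; verify them against your checkout.

# Hedged sanity check: everything installed above should import,
# and CUDA should be visible to torch.
import torch
import pytorch3d
import nvdiffrast.torch
import diff_gaussian_rasterization
import frnn

print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
print("pytorch3d", pytorch3d.__version__)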

Dataset Preparation

  1. Download the SyntheticDataset, ZJUMoCap, or PeopleSnapshot dataset. For ZJUMoCap, we use the refined version from instant-nvr because its masks are cleaner.
  2. For the PeopleSnapshot dataset, use script/peoplesnapshot.ipynb to process the data.
  3. Prepare the template mesh. We have included the template file in ./template; place it at {DATASET_DIR}/lbs/bigpose_mesh.ply for each dataset case.
    • You can also reconstruct the template mesh by following this link. Note that the resulting mesh needs to be re-meshed to approximately 40K vertices; a decimation sketch follows below.
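
If you reconstruct the template yourself, the re-meshing can be done with a generic mesh decimator. Here is a minimal sketch using open3d (already in the install list); the triangle target is a rough proxy for ~40K vertices, and the file paths are placeholders.

# Hedged sketch: decimate a reconstructed template toward ~40K vertices.
# open3d's decimation targets triangle count; for a closed mesh the
# triangle count is roughly 2x the vertex count, so 80K is a first guess.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("template_raw.ply")  # placeholder input
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=80_000)
mesh.remove_degenerate_triangles()
print("vertices after decimation:", len(mesh.vertices))
o3d.io.write_triangle_mesh("bigpose_mesh.ply", mesh)  # placeholder output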

The dataset will look like this after preparation.

DATASET_DIR
├── annots.npy
├── images
│   ├── 00
│   │   ├── 000000.jpg
│   │   └── 000001.jpg
│   └── 01
├── lbs
│   └── bigpose_mesh.ply
├── mask
│   ├── 00
│   │   ├── 000000.png
│   │   └── 000001.png
│   └── 01
└── params
    ├── 0.npy
    └── 1.npy
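
A quick way to catch missing files before training is to check this layout programmatically. The sketch below uses only the path names from the tree above, nothing repo-specific.

# Hedged sketch: verify a prepared dataset directory matches the layout above.
import os
import sys

def check_dataset(root):
    required = [
        "annots.npy",
        os.path.join("lbs", "bigpose_mesh.ply"),
        "images",
        "mask",
        "params",
    ]
    missing = [p for p in required if not os.path.exists(os.path.join(root, p))]
    if missing:
        sys.exit(f"missing from {root}: {missing}")
    print(f"{root}: layout looks complete")

check_dataset(sys.argv[1])  # usage: python check_dataset.py {DATASET_DIR}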

Training

python train.py -s {DATASET_DIR} -m {MODEL_DIR} -c ./config/{DATASET-TYPE}.yaml [--port {PORT}]

Training takes about 1 hour on an RTX 3090.
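
For example, a hypothetical ZJUMoCap run might look like the following; the dataset/model paths and the config filename are placeholders, so pick the file in ./config that matches your dataset type.

python train.py -s ./data/zjumocap_377 -m ./output/zjumocap_377 -c ./config/zjumocap.yaml --port 6009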

Visualization

To visualize the results during training, open the viewer, set the IP and port, and connect:

cd viewer
python net_viewer.py 

To visualize a trained model, run the following and then connect from the viewer as before:

python visualize.py -m {MODEL_DIR} --ip {IP} --port {PORT}

cd viewer
python net_viewer.py
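
If training or visualization runs on a remote machine, a common setup is to forward the chosen port over SSH and point the viewer at localhost. The port and host below are examples; match the --port you passed above.

ssh -N -L 6009:localhost:6009 user@remote-server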

Please refer to the video below for instructions on using the viewer and editing the appearance.

[viewer demo video]

Acknowledgement

We use the rasterizer and some data-processing code from Gaussian-Splatting. This project also uses RelightableAvatar for mesh extraction, nvjpeg-python for image encoding and decoding, and frnn for fast KNN computation. We sincerely thank the authors for their wonderful work.

Citation

@article{zhan2024interactive,
  title={Interactive Rendering of Relightable and Animatable Gaussian Avatars},
  author={Zhan, Youyi and Shao, Tianjia and Wang, He and Yang, Yin and Zhou, Kun},
  journal={arXiv preprint arXiv:2407.10707},
  year={2024}
}
