
MPMAvatar

MPMAvatar: Learning 3D Gaussian Avatars with Accurate and Robust Physics-Based Dynamics (NeurIPS 2025)

Changmin Lee, Jihyun Lee, Tae-Kyun (T-K) Kim

KAIST

[Project Page] [Paper] [Supplementary Video]

(Figures: teaser image and animated results)

We present MPMAvatar, a framework for creating 3D human avatars from multi-view videos that supports highly realistic, robust animation, as well as photorealistic rendering from free viewpoints. For accurate and robust dynamics modeling, our key idea is to use a Material Point Method-based simulator, which we carefully tailor to model garments with complex deformations and contact with the underlying body by incorporating an anisotropic constitutive model and a novel collision handling algorithm. We combine this dynamics modeling scheme with our canonical avatar that can be rendered using 3D Gaussian Splatting with quasi-shadowing, enabling high-fidelity rendering for physically realistic animations. In our experiments, we demonstrate that MPMAvatar significantly outperforms the existing state-of-the-art physics-based avatar model in terms of (1) dynamics modeling accuracy, (2) rendering accuracy, and (3) robustness and efficiency. Additionally, we present a novel application in which our avatar generalizes to unseen interactions in a zero-shot manner, which was not achievable with previous learning-based methods due to their limited simulation generalizability.

 

Environment Setup

  1. Clone this repository
 $ git clone https://github.com/KAISTChangmin/MPMAvatar.git
 $ cd MPMAvatar 
  2. Install required apt packages
 $ sudo apt-get update
 $ sudo apt-get install ffmpeg gdebi libgl1-mesa-glx libopencv-dev libsm6 libxrender1 libfontconfig1 libglvnd0 libegl1 libgles2 
  3. Install Blender for Ambient Occlusion map baking
 $ curl -OL https://download.blender.org/release/Blender4.4/blender-4.4.1-linux-x64.tar.xz
 $ tar -xJf ./blender-4.4.1-linux-x64.tar.xz -C ./
 $ rm -f ./blender-4.4.1-linux-x64.tar.xz
 $ mv ./blender-4.4.1-linux-x64 /usr/local/blender
 $ export PATH=/usr/local/blender:$PATH 
  4. Create conda environment and install required pip packages
 $ conda create -n mpmavatar python=3.10
 $ conda activate mpmavatar
 $ pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 -f https://download.pytorch.org/whl/torch_stable.html
 $ pip install -r ./requirements.txt
 $ pip install git+https://gitlab.inria.fr/bkerbl/simple-knn.git
 $ pip install git+https://github.com/slothfulxtx/diff-gaussian-rasterization.git
 $ FORCE_CUDA=1 pip install git+https://github.com/facebookresearch/pytorch3d.git 
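
To confirm the setup before moving on, you can run the quick checks below. This is only a sanity check, not part of the official pipeline; the module names are taken from the upstream packages installed above. Blender should print its version, PyTorch should report CUDA as available, and the CUDA extensions should import without errors.

 $ blender --version
 $ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
 $ python -c "from simple_knn._C import distCUDA2; import diff_gaussian_rasterization, pytorch3d; print('extensions OK')"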

 

Data Preparation

ActorsHQ data

Download the ActorsHQ data by filling out the request form at this link. We used four sequences (Actor01-Sequence1, Actor02-Sequence1, Actor03-Sequence1, Actor06-Sequence1), following PhysAvatar's setting.

4D-DRESS data

Download the 4D-DRESS data by filling out the request form at this link. We used four sequences: 00170_Inner, 00185_Inner, 00190_Inner, 00191_Inner.

PhysAvatar data

Download the preprocessed data for the ActorsHQ dataset provided by PhysAvatar (e.g., SMPL-X parameters, template meshes, garment part segmentation, UV coordinates) using their drive link, and unpack it into the ./data folder.

Additional preprocessed data

Download the additional preprocessed data for the ActorsHQ and 4D-DRESS datasets using this drive link, and unpack it into the ./data folder.

SMPL-X models

Download the SMPL-X model and VPoser checkpoint from the official SMPL-X website. You need to download SMPL-X v1.1 (NPZ+PKL, 830 MB) and VPoser v1.0 - CVPR'19 (2.5 MB). Unzip the downloaded files and move the body models and the VPoser checkpoint (vposer_v1_0/snapshots/TR00_E096.pt) to ./data/body_models.
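
As a sketch of this step, you can use the commands below; the archive names and extracted layout are assumptions based on the standard SMPL-X v1.1 and VPoser v1.0 downloads, so adjust the paths if yours differ.

 $ mkdir -p ./data/body_models/smplx
 $ unzip models_smplx_v1_1.zip
 $ cp models/smplx/SMPLX_NEUTRAL.npz models/smplx/SMPLX_FEMALE.npz models/smplx/SMPLX_MALE.npz ./data/body_models/smplx/
 $ unzip vposer_v1_0.zip
 $ cp vposer_v1_0/snapshots/TR00_E096.pt ./data/body_models/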

In the end your ./data folder should look like this:

data
    |-- ActorsHQ
        |-- Actor01
        |-- Actor02
        |-- Actor03
        |-- Actor06
    |-- 4D-DRESS
        |-- 00170_Inner
        |-- 00185_Inner
        |-- 00190_Inner
        |-- 00191_Inner
    |-- body_models
        |-- smplx
            |-- SMPLX_NEUTRAL.npz
            |-- SMPLX_FEMALE.npz
            |-- SMPLX_MALE.npz
        |-- TR00_E096.pt
    |-- a1_s1
    |-- a2_s1
    |-- a3_s1
    |-- a6_s1
    |-- s170_t1
    |-- s185_t1
    |-- s190_t2
    |-- s191_t2
    |-- demo
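
As a quick sanity check (not part of the official pipeline), the one-liner below verifies that the entries from the tree above are present under ./data:

 $ for d in ActorsHQ 4D-DRESS body_models/smplx body_models/TR00_E096.pt a1_s1 a2_s1 a3_s1 a6_s1 s170_t1 s185_t1 s190_t2 s191_t2 demo; do [ -e "./data/$d" ] && echo "OK      $d" || echo "MISSING $d"; done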

Demo Data

To help users run the demo without executing the full preprocessing and training pipeline, we provide the necessary intermediate results (e.g., tracked meshes, blend weights, Gaussian point clouds, and shadow networks).
You can download them from this link and place them according to the directory structure described in the issue.

 

Preprocessing

 $ cd preprocess
 $ bash ./scripts/actorshq_a1.sh
 $ cd .. 

Training

 $ bash ./scripts/appearance/actorshq_a1.sh
 $ bash ./scripts/physics/actorshq_a1.sh 

Simulation & Rendering

 $ bash ./scripts/sim/actorshq_a1.sh 

Evaluation

 $ bash ./scripts/eval/actorshq_a1.sh 

Run Demo

 $ python run_demo.py --save_name chair_sand --output_dir ./output/demo 
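
If you want a video of the demo output, the ffmpeg installed during environment setup can assemble the rendered frames. The frame pattern and output location below are assumptions (they follow the --output_dir and --save_name arguments above) and may need to be adjusted to the filenames the script actually writes:

 $ ffmpeg -framerate 30 -i ./output/demo/chair_sand/%04d.png -c:v libx264 -pix_fmt yuv420p chair_sand.mp4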
