
MultiPhys: Physics-aware 3D Motion

Code repository for the paper: MultiPhys: Multi-Person Physics-aware 3D Motion Estimation

Nicolas Ugrinovic, Boxiao Pan, Georgios Pavlakos, Despoina Paschalidou, Bokui Shen, Jordi Sanchez-Riera, Francesc Moreno-Noguer, Leonidas Guibas

arXiv | Website


News

[2024/06] Demo code release!

Installation

This code was tested on Ubuntu 20.04 LTS and requires a CUDA-capable GPU.

  1. First you need to clone the repository:

    git clone https://github.com/nicolasugrinovic/multiphys.git
    cd multiphys
    
  2. Set up the conda environment by running the following command:

    bash install_conda.sh
    We also include the following trouble-shooting alternatives. EITHER:
    • Manually install the environment and dependencies:
         conda create -n multiphys python=3.9 -y
         conda activate multiphys
         # install pytorch using pip, update with appropriate cuda drivers if necessary
         pip install torch==1.13.0 torchvision==0.14.0 --index-url https://download.pytorch.org/whl/cu117
         # uncomment if pip installation isn't working
         # conda install pytorch=1.13.0 torchvision=0.14.0 pytorch-cuda=11.7 -c pytorch -c nvidia -y
         # install remaining requirements
         pip install -r requirements.txt

    OR:

    • Create the environment from a file. We use PyTorch 1.13.0 with CUDA 11.7. Use env_build.yml to speed up installation with already-solved dependencies, though it might not be compatible with your CUDA driver:
      conda env create -f env_build.yml
      conda activate multiphys
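
      Either way, you can optionally sanity-check the installation afterwards (a minimal sketch; the versions printed depend on your setup):
          # check that PyTorch is importable and sees your GPU
          python -c "import torch; print(torch.__version__, torch.cuda.is_available())"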
      
  3. Download and set up MuJoCo 2.1.0:

    wget https://github.com/deepmind/mujoco/releases/download/2.1.0/mujoco210-linux-x86_64.tar.gz
    tar -xzf mujoco210-linux-x86_64.tar.gz
    mkdir ~/.mujoco
    mv mujoco210 ~/.mujoco/
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco210/bin

    If you have any problems with this, please follow the instructions in the EmbodiedPose repo regarding MuJoCo.
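
    The export above only affects the current shell. To make it persistent, you can append it to your shell profile (a minimal sketch, assuming the default install location used above):

        echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco210/bin' >> ~/.bashrc
        source ~/.bashrc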

  4. Download the demo data, which includes the required models:

    bash fetch_demo_data.sh
    Trouble-shooting
    • Download the SMPL parameters from the SMPL website and unzip them into the data/smpl folder. Please download the v1.1.0 version, which contains the neutral humanoid.
    • Download the VPoser parameters from the SMPL-X website and unzip them into the data/vposer folder.
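    After downloading (via the script or manually), a quick check like the following confirms the model files are in place (a sketch; adjust if your layout differs):
        ls data/smpl data/vposer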
  5. (optional) Our code uses EGL to render MuJoCo simulation results in a headless fashion, so you need to have EGL installed. You may need to run the following or similar commands, depending on your system:
    sudo apt-get install libglfw3-dev libgles2-mesa-dev
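    To check that headless rendering is configured, a minimal sanity check is shown below (a sketch; the first import also triggers mujoco_py's one-time compilation):
        export MUJOCO_GL=egl
        python -c "import mujoco_py; print('mujoco_py OK')"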
  6. For evaluation: to run the collision-based penetration metric used in the evaluation scripts, you need to properly install the SDF package. Please follow the instructions found here.

Demo: Generating physically corrected motion

The data used here, including the SLAHMR estimates, should already have been downloaded and placed in the correct folders by the fetch_demo_data.sh script.

Run the demo script. You can use one of the following commands:

EITHER, to generate several sequences:

bash run_demo.sh

OR, to generate one sequence:

# adjust the MuJoCo path below to your own installation
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia:$HOME/.mujoco/mujoco210/bin;
export MUJOCO_GL='egl';
# generate sequence
# expi sequences
python run.py --cfg tcn_voxel_4_5_chi3d_multi_hum --data sample_data/expi/expi_acro1_p1_phalpBox_all_slaInit_slaCam.pkl --data_name expi --name slahmr_override_loop2 --loops_uhc 2 --filter acro1_around-the-back1_cam20
Trouble-shooting
  • If you have any issues when mujoco_py compiles on first run, take a look at this GitHub issue: mujoco_py issue

This will generate a video for each sample that appears in the paper and in the paper's video. Results are saved in the results/scene+/tcn_voxel_4_5_chi3d_multi_hum/results folder. For each dataset, this will generate a folder with the results, following the structure:

<dataset-name>
├── slahmr_override_loop2
    ├── <subject-name>
        ├── <action-name>
           ├── <date>
              ├── 1_results_w_2d_p1.mp4
              ├── ...
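
For example, to list all rendered videos after a run, you could use a command like this (a sketch, assuming the output folder above):

find results/scene+/tcn_voxel_4_5_chi3d_multi_hum/results -name "*.mp4"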

Evaluation

You first need to generate the physically corrected motion for each dataset as explained above. Results should be saved, for each dataset, in the results/scene+/tcn_voxel_4_5_chi3d_multi_hum/DATASET_NAME folder.

Prepare results

Then, you need to process the results to prepare them for the evaluation scripts. To do so, run the metrics/prepare_pred_results.py script, specifying the dataset name and the experiment name, for example:

python metrics/prepare_pred_results.py --data_name chi3d --exp_name slahmr_override_loop2

This will generate .pkl files named after the subjects (e.g., s02.pkl for CHI3D) under the experiment folder. In case you want to run the penetration metric with SDF, you also need to generate a file that stores the vertices for each sequence. To do this, add the --save_verts option and run the following command.

python metrics/prepare_pred_results.py --data_name chi3d --exp_name slahmr_override_loop2 --save_verts=1

This will generate _verts.pkl files named after the subjects (e.g., s02_verts.pkl for CHI3D) under the experiment folder. These vertex files are necessary to compute the SDF penetration metric.
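
If you want to prepare all datasets in one go, a small loop like the following works (a sketch; it assumes results exist for all three datasets under the same experiment name):

for d in chi3d hi4d expi; do
    python metrics/prepare_pred_results.py --data_name $d --exp_name slahmr_override_loop2
    python metrics/prepare_pred_results.py --data_name $d --exp_name slahmr_override_loop2 --save_verts=1
done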

You need to either generate both the .pkl and _verts.pkl files for each baseline you want to evaluate (EmbPose-mp, SLAHMR) or download the pre-processed results from here.
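
To sanity-check a prepared file before running the metrics, you can simply load it with pickle (a sketch; the path is hypothetical and the internal structure is whatever prepare_pred_results.py wrote):

python -c "import pickle; d = pickle.load(open('path/to/experiment_folder/s02.pkl', 'rb')); print(type(d))"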

Run evaluation

For evaluation, use the script metrics/compute_metrics_all.py. This generates the metrics for each specified dataset and each type of metric (i.e., pose, physics-based, and penetration (sdf)). Please note that for running the SDF-based penetration metric, you need to install the sdf library. Follow the instructions found here.

To run the evaluation for a given dataset, e.g., CHI3D, run the following commands. Please make sure to change all the paths in the scripts to point to your own folders:

# adjust the MuJoCo path below to your own installation
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia:$HOME/.mujoco/mujoco210/bin;
# pose metrics
python metrics/compute_metrics_all.py --data_name chi3d --metric_type pose_mp
# physics-based metrics
python metrics/compute_metrics_all.py --data_name chi3d --metric_type phys
# inter-person penetration with SDF
python metrics/compute_metrics_all.py --data_name chi3d --metric_type sdf

You can choose any of the 3 datasets: ['chi3d', 'hi4d', 'expi']. NOTE: the metrics/compute_metrics_all.py script is meant to compute the results table from the paper for all experiments and all subjects of each dataset, so in order to generate an output file, you need to generate results for all subjects in the chosen dataset.
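
To run all metric types for all datasets, a loop like the following can be used (a sketch; it assumes prepared results exist for every subject of every dataset):

for d in chi3d hi4d expi; do
    for m in pose_mp phys sdf; do
        python metrics/compute_metrics_all.py --data_name $d --metric_type $m
    done
done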

Data pre-processing

To generate data in the ./sample_data directory, you need to do the following:

  1. Copy the two scripts third_party/slahmr/run_opt_world.py and third_party/slahmr/run_vis_world.py into the SLAHMR repo, then run the commands placed in ./scripts for each subject in each dataset, e.g.:
  bash scripts/camera_world/run_opt_world_chi3d.sh chi3d/train/s02 0 chi3d
  bash scripts/camera_world/run_opt_world_chi3d.sh chi3d/train/s03 0 chi3d
  bash scripts/camera_world/run_opt_world_chi3d.sh chi3d/train/s04 0 chi3d

Note: you need to change the root variable inside these scripts to point to your own SLAHMR repo directory.
This will generate {seq_name}_scene_dict.pkl files in the SLAHMR output folder, which are then read by MultiPhys.

If the previous scripts do not work for you, just run the following command for each video, making sure to change the data.root and data.seq arguments accordingly:

python run_opt_world.py data=chi3d run_opt=False run_vis=True data.root=$root/videos/chi3d/train/$seq_num data.seq="${video}" data.seq_id=$seq_num
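
If you prefer to script this step, a loop along these lines runs the command over several videos of one subject (a sketch; root, seq_num, and the sequence names are placeholders for your own data layout):

root=/path/to/your/data        # placeholder: your data root
seq_num=s02                    # placeholder: subject id
for video in seq_name_1 seq_name_2; do   # replace with your sequence names
    python run_opt_world.py data=chi3d run_opt=False run_vis=True \
        data.root=$root/videos/chi3d/train/$seq_num data.seq="${video}" data.seq_id=$seq_num
done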
  2. Run the commands from data_preproc.sh for each dataset. This will generate the files directly in the sample_data folder.
  3. Finally, you can run the demo code on your processed data as explained above.

NOTE: please replace the paths in the code with your own paths.

TODO List

  • Demo/inference code
  • Data pre-processing code
  • Evaluation

Acknowledgements

Parts of the code are taken or adapted from the following amazing repos:

Citing

If you find this code useful for your research, please consider citing the following paper:

@inproceedings{ugrinovic2024multiphys,
  author    = {Ugrinovic, Nicolas and Pan, Boxiao and Pavlakos, Georgios and Paschalidou, Despoina and Shen, Bokui and Sanchez-Riera, Jordi and Moreno-Noguer, Francesc and Guibas, Leonidas},
  title     = {MultiPhys: Multi-Person Physics-aware 3D Motion Estimation},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
}
