- Create the environment using:

  ```bash
  conda env create -f environment.yml
  ```
- Our point-based methods require the CUDA PointNet++ acceleration; follow the setup instructions in the P4Transformer repository: here.
- Full raw dataset: [URL] Coming Soon
- Full processed dataset: [URL] Coming Soon
- Full processed dataset (radar modality): [URL] Coming Soon
- Sample Vis dataset: here
After downloading, organize your folders as follows (we recommend keeping datasets outside the repo):
```
M4Human-main/                 # Main repo folder
mmDataset/                    # Full processed dataset
    MR-Mesh/
        rf3dpose_all/
            calib.lmdb
            image.lmdb
            depth.lmdb
            radar_pc.lmdb     # RPC
            params.lmdb       # GT params
            indeces.pkl.gz    # dataset split configuration
            ...               # other .lmdb and .lock files
cached_data_test_vis/         # Sample Vis dataset
    rf3dpose_all/
        calib.lmdb
        image.lmdb
        depth.lmdb
        radar_pc.lmdb
        params.lmdb
        indeces.pkl.gz
        ...                   # other .lmdb and .lock files
```
For example, if you are using the "Sample Vis dataset": after downloading "rf3dpose_all.zip" from the source, start from M4Human-main/ and run the following:

```bash
cd ..
mkdir cached_data_test_vis
mv rf3dpose_all.zip cached_data_test_vis/
cd cached_data_test_vis/
unzip rf3dpose_all.zip
```
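To sanity-check a downloaded split, the sketch below opens one of the LMDB environments and the split file. The key format, value encoding, whether each `.lmdb` is a single file or a directory, and the local path are all assumptions, not documented behavior of this repo.

```python
# Minimal sketch (not part of the repo) for sanity-checking a downloaded split.
# Assumptions: each *.lmdb is a single-file LMDB environment (subdir=False) and
# indeces.pkl.gz is a gzip-compressed pickle; adjust the path to your layout.
import gzip
import pickle
import lmdb

data_root = "../cached_data_test_vis/rf3dpose_all"  # hypothetical local path

# Load the dataset split configuration.
with gzip.open(f"{data_root}/indeces.pkl.gz", "rb") as f:
    splits = pickle.load(f)
print("split object type:", type(splits))

# Open one LMDB environment read-only and report how many records it holds.
env = lmdb.open(f"{data_root}/radar_pc.lmdb", subdir=False, readonly=True, lock=False)
with env.begin() as txn:
    print("entries in radar_pc.lmdb:", txn.stat()["entries"])
env.close()
```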
- Download the SMPL-X models from the official source or URL.
- Place them in models/ with the following structure:
```
M4Human-main/
    models/
        smplx/
            SMPLX_FEMALE.npz
            SMPLX_FEMALE.pkl
            SMPLX_MALE.npz
            SMPLX_MALE.pkl
            SMPLX_NEUTRAL.npz
            SMPLX_NEUTRAL.pkl
            smplx_npz.zip
            version.txt
```
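A quick way to confirm the model files are readable is to load one with the official `smplx` Python package (`pip install smplx`). This is only a verification sketch; how this repo itself loads the models may differ.

```python
# Minimal sketch (not the repo's loading code) that checks the SMPL-X files
# are readable, using the official `smplx` package.
import smplx

model = smplx.create(
    model_path="models",   # folder that contains the smplx/ subfolder above
    model_type="smplx",
    gender="neutral",
)
output = model()  # forward pass with default (zero) parameters
print("SMPL-X vertices:", output.vertices.shape)  # expected: (1, 10475, 3)
```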
- We provide demo.ipynb for:
  - Dataset vis demo example: generate modality GIFs and save to vis_depth/
  - Preprocessed radar dataloader tutorial

For more details, see the comments in the code and notebook.
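As a rough illustration of the GIF-export step in the visualization demo, the sketch below writes a depth GIF into vis_depth/ using `imageio`. The random frames are placeholders for the real modality data the notebook loads; this is not the notebook's actual code.

```python
# Hedged sketch of exporting a modality GIF to vis_depth/ (placeholder frames;
# the real notebook reads actual depth data from the dataset).
import os
import numpy as np
import imageio.v2 as imageio

os.makedirs("vis_depth", exist_ok=True)

frames = []
for _ in range(30):
    depth = np.random.rand(240, 320)  # placeholder for one depth frame
    # Normalize to 8-bit grayscale for GIF encoding.
    frame = (255 * (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)).astype(np.uint8)
    frames.append(frame)

imageio.mimsave("vis_depth/depth_demo.gif", frames)
```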
- To run the benchmark:

  ```bash
  torchrun --nproc_per_node=4 main1_multigpu.py
  ```

- Select the model/config by editing main1_multigpu.py (a generic sketch of the distributed setup that a torchrun launch expects is shown after this list).
- RGBD support will be released after further organization.
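For context on what the torchrun launch provides, here is a generic sketch of how a torchrun-launched script typically initializes multi-GPU training with DistributedDataParallel. It is not the repo's main1_multigpu.py; the placeholder model and the omitted training loop are deliberate.

```python
# Generic torchrun/DDP initialization sketch (not main1_multigpu.py).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")  # reads the env:// settings above
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(16, 16).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])

    # ... benchmark / training loop goes here ...

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```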