Official PyTorch implementation of our paper:
Xiaoxuan Ma*, Yutang Lin*, Yuan Xu, Stephan P. Kaufhold, Jack Terwilliger, Andres Meza, Yixin Zhu, Federico Rossano, Yizhou Wang
Clone this project. NVIDIA GPUs are needed.
```bash
git clone https://github.com/ShirleyMaxx/AlphaChimp
cd AlphaChimp
conda create -n alphachimp python=3.8
conda activate alphachimp
conda install ffmpeg
pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
pip install -v -e .
pip install openmim
mim install mmcv==2.1.0
pip install pycocotools shapely terminaltables imageio[ffmpeg] lap
```
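As an optional sanity check (not part of the original setup steps), you can confirm that the CUDA build of PyTorch is visible before moving on:

```bash
# Should print the torch version and True on a machine with a usable NVIDIA GPU
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```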
Please follow the instructions to download and preprocess the ChimpACT dataset and place it under:

```
AlphaChimp/
├── ...
└── data/
    └── ChimpACT_processed/
        ├── annotations/
        │   ├── action/
        │   └── ...
        ├── test/
        ├── train/
        └── ...
```
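A quick optional check (assuming the layout above) that the processed data is where the configs expect it:

```bash
# Should list the annotation, train, and test directories without errors
ls data/ChimpACT_processed/annotations data/ChimpACT_processed/train data/ChimpACT_processed/test
```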
Download our pretrained checkpoints and place them under `work_dirs/alphachimp`. We provide checkpoints with different resolutions.
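For example, placing a downloaded checkpoint could look like this (a sketch only: the download path is a placeholder, and the `alphachimp_res576.pth` filename follows the naming used by the evaluation commands later in this README):

```bash
# Hypothetical example: move a downloaded checkpoint into the expected folder
mkdir -p work_dirs/alphachimp
mv ~/Downloads/alphachimp_res576.pth work_dirs/alphachimp/
```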
We support inference on videos containing chimpanzees.
- Put the videos under the directory `infer_input` (an example follows the command below).
- Specify the number of GPUs with `${NGPU}`.
- Choose the argument `vis_mode` in `[det, act, mix]` to visualize detection, action, or both.
- By default, we save the visualized results in `infer_output`.
- Run:

  ```bash
  python -m torch.distributed.launch --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" \
      --nproc_per_node=${NGPU} --master_port=22525 tools/inference.py \
      configs/alphachimp/alphachimp_infer576.py \
      --vis_mode 'mix' \
      --gpus ${NGPU}
  ```
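For instance, a minimal way to prepare the input directory and the GPU count (the video filename here is purely illustrative):

```bash
# Hypothetical example: one input video, two GPUs
mkdir -p infer_input
cp /path/to/chimp_video.mp4 infer_input/
NGPU=2
```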
We support different resolutions; choose `${RES}` as 256 or 576. Specify the number of GPUs with `${NGPU}`. We support DDP training. Run:
```bash
python -m torch.distributed.launch --nnodes=1 --node_rank=0 --master_addr=127.0.0.1 \
    --nproc_per_node=${NGPU} --master_port=25525 tools/train.py \
    configs/alphachimp/alphachimp_res${RES}.py
```
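For example, the two variables can simply be set in the shell before launching (the values below are illustrative):

```bash
# Example values; adjust to your hardware
RES=256
NGPU=4
```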
Specify the resolution `${RES}` and the number of GPUs `${NGPU}`. We set a default model checkpoint for evaluation; change it with `--checkpoint`. Run:
```bash
python -m torch.distributed.launch --nnodes=1 --node_rank=0 --master_addr=127.0.0.1 \
    --nproc_per_node=${NGPU} --master_port=25525 tools/test.py \
    configs/alphachimp/alphachimp_res${RES}.py \
    --checkpoint work_dirs/alphachimp/alphachimp_res${RES}.pth
```
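As an illustration of overriding the default checkpoint, a hedged example invocation (the checkpoint path and GPU count below are placeholders; substitute whatever checkpoint you want to evaluate):

```bash
# Hypothetical example: evaluate the 576-resolution config on 2 GPUs with a custom checkpoint
python -m torch.distributed.launch --nnodes=1 --node_rank=0 --master_addr=127.0.0.1 \
    --nproc_per_node=2 --master_port=25525 tools/test.py \
    configs/alphachimp/alphachimp_res576.py \
    --checkpoint /path/to/your_checkpoint.pth
```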
- Save detection results. Specify the resolution `${RES}` and the number of GPUs `${NGPU}`. We set a default model checkpoint for evaluation; change it with `--checkpoint`. By default, results will be saved to `mmtracking/track_pkl`, controlled by the argument `output_dir`.

  ```bash
  # conda activate alphachimp
  python -m torch.distributed.launch --nnodes=1 --node_rank=0 --master_addr=127.0.0.1 \
      --nproc_per_node=${NGPU} --master_port=25525 tools/save_tracking.py \
      configs/alphachimp/alphachimp_tracking${RES}.py \
      --checkpoint work_dirs/alphachimp/alphachimp_res${RES}.pth \
      --gpus ${NGPU}
  ```
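  A quick optional check that the results file was written (the `summary.pkl` name comes from the evaluation step below):

  ```bash
  # Should show a non-empty pickle file
  ls -lh mmtracking/track_pkl/summary.pkl
  ```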
- Download the fixed annotation file and place it under `data/ChimpACT_processed/annotations/test_fix.json`.
- We evaluate tracking performance in a new environment. Create the tracking evaluation environment by:

  ```bash
  # go inside mmtracking
  cd mmtracking
  conda create -n eval_tracking python=3.8
  conda activate eval_tracking
  conda install ffmpeg
  pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
  pip install --no-deps -r requirements/alphachimp.txt
  # no worry if mmcv cannot be successfully installed at this step
  pip install -v -e .
  pip install openmim
  mim install mmcv-full==1.7.2
  cd TrackEval
  pip install -v -e .
  cd ..
  ```
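  Optionally, you can confirm that this environment picked up the 1.x MMCV package (this check is not part of the original steps):

  ```bash
  # Should print a 1.7.x version inside the eval_tracking environment
  python -c "import mmcv; print(mmcv.__version__)"
  ```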
- Evaluate tracking performance. This will load the saved file `mmtracking/track_pkl/summary.pkl` generated in the previous step.

  ```bash
  # conda activate eval_tracking
  python -m torch.distributed.launch --nnodes=1 --node_rank=0 --master_addr=127.0.0.1 --nproc_per_node=1 --master_port=25525 \
      tools/evaluate_tracking.py configs/evaluate_tracking.py
  ```
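If you want to peek at the saved results yourself, the file appears to be a Python pickle (judging by its extension); a minimal, hedged sketch that only prints the top-level type of its contents:

```bash
# Inspect the saved tracking results; the exact structure of the contents
# depends on the AlphaChimp tooling and is not documented here
python -c "import pickle; d = pickle.load(open('mmtracking/track_pkl/summary.pkl', 'rb')); print(type(d))"
```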
```bibtex
@article{ma2024alphachimp,
  title={AlphaChimp: Tracking and Behavior Recognition of Chimpanzees},
  author={Ma, Xiaoxuan and Lin, Yutang and Xu, Yuan and Kaufhold, Stephan and Terwilliger, Jack and Meza, Andres and Zhu, Yixin and Rossano, Federico and Wang, Yizhou},
  journal={arXiv preprint arXiv:2410.17136},
  year={2024}
}
```

This repo is built on the excellent work of MMDetection, MMTracking, and MMAction2. Thanks for these great projects.
