A comprehensive benchmark suite for skeleton-based action recognition algorithms
📄 Paper • 🌐 Project Website • 📊 Dataset • 💻 Code
Anubis is a comprehensive benchmark suite for skeleton-based action recognition, featuring implementations of multiple state-of-the-art methods. It provides researchers with a unified platform for training, evaluating, and comparing algorithms on skeleton-based action recognition tasks.
## Features

- 🔥 15 State-of-the-Art Algorithms: Integration of the most representative skeleton-based action recognition methods
- 📊 Unified Evaluation Framework: Standardized training, testing, and evaluation pipeline
- 🚀 Efficient Implementation: Optimized code implementations with GPU acceleration support
- 📈 Comprehensive Benchmarking: Support for multiple mainstream datasets
- 🛠️ Easy to Use: Detailed documentation and example code
## Project Structure

```
Anubis-benchmark/
├── 2s-AGCN/
├── BlockGCN/
├── CTR-GCN/
├── Decoupling_GCN/
├── DeGCN/
├── GCN-NAS/
├── HDGCN/
├── Hyperformer/
├── InfoGCN/
├── LAGCN/
├── Motif-stgcn/
├── MS-G3D/
├── ShiftGCN/
├── STGCN/
├── STTFormer/
├── requirements.txt
└── README.md
```

## Requirements

- Operating System: Linux (recommended), Windows, or macOS
- Python: 3.6 or higher
- CUDA: 10.2 or higher (recommended for GPU acceleration)
## Installation

```bash
# Clone the repository
git clone https://github.com/your-username/Anubis-benchmark.git
cd Anubis-benchmark

# Create an environment using conda
conda create -n anubis python=3.9
conda activate anubis

# Or using virtualenv
python -m venv anubis_env
source anubis_env/bin/activate    # Linux/macOS
# anubis_env\Scripts\activate     # Windows

# Install dependencies
pip install -r requirements.txt
```

## Dataset

Download and process the Anubis dataset, available at: 👉 HuggingFace Dataset Link
If you use our dataset, please cite our paper in your work.
## Quick Start

```bash
# Using DeGCN as an example
cd DeGCN
python main.py --config ./config/anubis/anubis.yaml
```

To use a custom dataset, you need to:

- Implement a data feeder (`feeders/`)
- Define the graph structure (`graph/`)
- Update the configuration files
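As a rough illustration of the first two steps, a custom feeder typically exposes a dataset-style `__len__`/`__getitem__` interface over skeleton tensors, and the graph module builds a joint adjacency matrix from a bone list. This is a minimal sketch only: the names `SkeletonFeeder` and `build_adjacency` and the five-joint bone list are hypothetical, not part of this repository; consult each method's `feeders/` and `graph/` modules for the exact interfaces and expected tensor layouts.

```python
import numpy as np

class SkeletonFeeder:
    """Hypothetical minimal feeder over skeleton data shaped (N, C, T, V, M):
    N samples, C coordinate channels, T frames, V joints, M persons."""

    def __init__(self, data, labels):
        self.data = np.asarray(data, dtype=np.float32)
        self.labels = np.asarray(labels, dtype=np.int64)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, index):
        # Return one (skeleton sequence, action label) pair.
        return self.data[index], self.labels[index]


def build_adjacency(num_joints, bones):
    """Hypothetical graph definition: joints are nodes, bones are
    undirected edges; returns an adjacency matrix with self-loops."""
    A = np.eye(num_joints, dtype=np.float32)  # self-loops on the diagonal
    for i, j in bones:
        A[i, j] = A[j, i] = 1.0  # symmetric edge for each bone
    return A
```

A real feeder would additionally read samples from disk and apply the normalization and augmentation each method expects, and the graph module usually also derives the normalized/partitioned adjacency variants used by the GCN layers.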
## Citation

If you find this benchmark or dataset useful in your research, please consider citing:
```bibtex
@misc{liu2025representationcentricsurveyskeletalaction,
      title={Representation-Centric Survey of Skeletal Action Recognition and the ANUBIS Benchmark},
      author={Yang Liu and Jiyao Yang and Madhawa Perera and Pan Ji and Dongwoo Kim and Min Xu and Tianyang Wang and Saeed Anwar and Tom Gedeon and Lei Wang and Zhenyue Qin},
      year={2025},
      eprint={2205.02071},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2205.02071},
}
```

## Acknowledgements

Thanks to all the original paper authors for their outstanding contributions to skeleton-based action recognition.
⭐ If you like this project, please give it a star!
