# Welcome! 👋

DeepLabCut™️ is a toolbox for state-of-the-art markerless pose estimation of animals performing various behaviors. As long as you can see (label) what you want to track, you can use this toolbox, as it is animal and object agnostic. Read a short development and application summary below.

## Installation: how to install DeepLabCut

Please click the link above for all the information you need to get started! Please note that currently we support only Python 3.10+ (see conda files for guidance).

## Quick start

Developers' stable release: a very quick start (Python 3.10+ required) to install DeepLabCut with the PyTorch engine:

  • [1] Install PyTorch: pip install torch torchvision. If you want to use a GPU, select the build matching your CUDA version (please check the PyTorch docs for the right one), for example with conda:

conda install pytorch cudatoolkit=11.3 -c pytorch

  • [2] Install pytables:

conda install -c conda-forge pytables==3.8.0

  • [3] Finally, install DeepLabCut (with all functions + the GUI):

pip install --pre "deeplabcut[gui]"

or pip install --pre "deeplabcut" (headless version with PyTorch)!
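Once installed, a minimal sanity check from Python confirms that the package imports and (optionally) that PyTorch can see your GPU:

```python
import deeplabcut
import torch

# Confirm the install; 3.x releases ship the PyTorch engine.
print(deeplabcut.__version__)

# Optional: check whether your PyTorch build can see a CUDA GPU.
print(torch.cuda.is_available())
```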

To use the TensorFlow (TF) engine (requires Python 3.10; TF up to v2.10 is supported on Windows, up to v2.12 on other platforms), run pip install "deeplabcut[gui,tf]" (which includes all functions plus GUIs) or pip install "deeplabcut[tf]" (headless version with PyTorch and TensorFlow). We aim to deprecate the TF engine in 2027.

We recommend using our conda file (see here) or the deeplabcut-docker package.

## Documentation: The DeepLabCut Process

Our docs walk you through using DeepLabCut and its key API points. For an overview of the toolbox and the workflow for project management, see our step-by-step Nature Protocols paper.
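As a minimal sketch of that workflow through the high-level API (the project name, experimenter, and video paths below are placeholder examples; see the docs for the full parameter lists):

```python
import deeplabcut

# Create a project; this returns the path to the project's config.yaml.
# "reaching-task", "alex", and the video path are hypothetical examples.
config_path = deeplabcut.create_new_project(
    "reaching-task", "alex", ["/videos/session1.mp4"], copy_videos=True
)

deeplabcut.extract_frames(config_path)           # select frames to label
deeplabcut.label_frames(config_path)             # opens the labeling GUI
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)
deeplabcut.evaluate_network(config_path)
deeplabcut.analyze_videos(config_path, ["/videos/session1.mp4"])
deeplabcut.create_labeled_video(config_path, ["/videos/session1.mp4"])
```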

For a deeper understanding and more resources for you to get started with Python and DeepLabCut, please check out our free online course! http://DLCcourse.deeplabcut.org

## DEMO the code

🐭 pose tracking of single animals demo Open in Colab

See more demos here. We provide data and several Jupyter notebooks: one that walks you through a demo dataset to test your installation, and another that runs DeepLabCut from the beginning on your own data. We also show you how to use the code in Docker and on Google Colab.

## Why use DeepLabCut?

DeepLabCut continues to be actively maintained and we strive to provide a user-friendly GUI and API for computer vision researchers and life scientists alike. This means we integrate state-of-the-art models and frameworks, while providing our “best-guess” defaults for life scientists. We highly encourage you to read our papers to get a better understanding of what to use and how to modify the models for your setting.

## Performance 🔥

In general, we provide all the tooling for you to train and use custom models with various high-performance backbones. We also provide two pretrained foundation models for animals: SuperAnimal-Quadruped and SuperAnimal-TopViewMouse. To gauge their out-of-distribution performance, we provide the table below.

These models were trained on the SuperAnimal-Quadruped dataset with AP-10K held out, and on the SuperAnimal-TopViewMouse dataset with DLC-OpenField held out, for out-of-distribution testing. We provide models that include AP-10K in the API (and GUI). Note that there are many different models to select from in DeepLabCut 3.0; we strongly recommend you check this Guide for more details. The table below gives you a sense of performance on real-world, complex in-the-wild data and on lab mouse data, respectively. This link provides the model weights to reproduce these numbers; but please note, our full models are in our DLClibrary and released in the API.
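For example, running a SuperAnimal model on your own video via the API looks roughly like the sketch below; the video path is a placeholder, and the exact model/detector names and keyword arguments can differ across 3.0 releases, so check help(deeplabcut.video_inference_superanimal):

```python
import deeplabcut

# Zero-shot inference with a pretrained SuperAnimal model. The video path
# is a placeholder; model_name and detector_name are example values that
# may vary by release.
deeplabcut.video_inference_superanimal(
    ["/videos/mouse_openfield.mp4"],
    superanimal_name="superanimal_topviewmouse",
    model_name="hrnet_w32",
    detector_name="fasterrcnn_resnet50_fpn_v2",
)
```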

**DLC 3.0 Pose Estimation (Top-Down Models)**

| Model Name | Type | mAP, SA-Q on AP-10K | mAP, SA-TVM on DLC-OpenField |
| --- | --- | --- | --- |
| top_down_resnet_50 | Top-Down | 54.9 | 93.5 |
| top_down_resnet_101 | Top-Down | 55.9 | 94.1 |
| top_down_hrnet_w32 | Top-Down | 52.5 | 92.4 |
| top_down_hrnet_w48 | Top-Down | 55.3 | 93.8 |
| rtmpose_s | Top-Down | 52.9 | 92.9 |
| rtmpose_m | Top-Down | 55.4 | 94.8 |
| rtmpose_x | Top-Down | 57.6 | 94.5 |
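To train one of these architectures on your own project, the model name can be passed as the net_type when creating the training dataset; a minimal sketch, assuming an existing 3.0 project (the config path is a placeholder, and the accepted net_type strings are listed in the docs):

```python
import deeplabcut

config_path = "/path/to/project/config.yaml"  # placeholder project config

# The net_type strings mirror the model names in the table above;
# "top_down_hrnet_w32" here is an illustrative choice.
deeplabcut.create_training_dataset(config_path, net_type="top_down_hrnet_w32")
deeplabcut.train_network(config_path)
```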

## The History

In 2018, we demonstrated the capabilities for trail tracking, reaching in mice, and various Drosophila behaviors during egg-laying (see Mathis et al. for details). There is, however, nothing specific that makes the toolbox only applicable to these tasks and/or species. The toolbox has already been successfully applied (by us and others) to rats, humans, various fish species, bacteria, leeches, various robots, cheetahs, mouse whiskers, and racehorses. DeepLabCut utilized the feature detectors (ResNets + readout layers) of one of the state-of-the-art algorithms for human pose estimation, DeeperCut by Insafutdinov et al., which inspired the name for our toolbox (see references below). Since then, the package has changed substantially. The code has been re-tooled and re-factored since 2.1+: we have added faster and higher-performance variants with MobileNetV2, EfficientNet, and our own DLCRNet backbones (see Pretraining boosts out-of-domain robustness for pose estimation and Lauer et al 2022). Additionally, we have improved the inference speed, provided additional and novel augmentation methods, and added real-time and multi-animal support. In v3.0+ we changed the backend to support PyTorch. This brings not only an easier installation process for users, but also performance gains, developer flexibility, and a lot of new tools! Importantly, the high-level API stays the same, so it is a seamless transition for users 💜! We currently provide state-of-the-art performance for animal pose estimation, and the labs (M. Mathis Lab and A. Mathis Group) have published papers in both top journals and computer vision conferences.

Figure: Left: Due to transfer learning it requires little training data for multiple, challenging behaviors (see Mathis et al. 2018 for details). Mid Left: The feature detectors are robust to video compression (see Mathis/Warren for details). Mid Right: It allows 3D pose estimation with a single network and camera (see Mathis/Warren). Right: It allows 3D pose estimation with a single network trained on data from multiple cameras together with standard triangulation methods (see Nath* and Mathis* et al. 2019).

DeepLabCut is embedded in a larger open-source ecosystem, providing behavioral tracking for neuroscience, ecology, medical, and technical applications. Moreover, many new tools are being actively developed. See DLC-Utils for some helper code.

## Code contributors:

DLC code was originally developed by Alexander Mathis & Mackenzie Mathis, and was extended in 2.0 with the core dev team consisting of Tanmay Nath (2.0-2.1), Jessy Lauer (2.1-2.4), and Niels Poulsen (2.3-3.0). DeepLabCut is an open-source tool and has benefited from suggestions and edits by many individuals including early contributors: Mert Yuksekgonul, Tom Biasi, Richard Warren, Ronny Eichler, Hao Wu, Federico Claudi, Gary Kane and Jonny Saunders as well as the 100+ contributors. Please see AUTHORS for more details!

🤩 This is an actively developed package and we welcome community development and involvement:

Contributors

## Get Assistance & be part of the DLC Community✨:

| 🚉 Platform | 🎯 Goal | ⏱️ Estimated Response Time | 📢 Support Squad |
| --- | --- | --- | --- |
| GitHub DeepLabCut/Issues | To report bugs and code issues🐛 (we encourage you to search issues first) | 2-5 days | DLC Core Dev Team |
| GitHub DeepLabCut/Contributing | To contribute your expertise and experience🙏💯 | 2-5 days | DLC Core Dev Team |
| 🚧 GitHub DeepLabCut/Roadmap | To learn more about our journey✈️ | N/A | N/A |
| Image.sc forum 🐭 Tag: DeepLabCut | To ask for help and support questions 👋 | Promptly🔥 | The DLC Community |
| Gitter | To discuss with other users, share ideas, and collaborate💡 | 2-5 days | The DLC Community |
| Bluesky🦋 | To keep up with our latest news and updates 📢 | 2-5 days | DLC Team |
| Twitter Follow | To keep up with our latest news and updates 📢 | 2-5 days | DLC Team |
| The DeepLabCut AI Residency Program | To come and work with us next summer👏 | Annually | DLC Team |

## References & Citations:

Please see our dedicated page on how to cite DeepLabCut 🙏 and our suggestions for your Methods section!

## License:

This project is primarily licensed under the GNU Lesser General Public License v3.0. Note that the software is provided “as is”, without warranty of any kind, express or implied. If you use the code or data, please cite us! Note, artwork (DeepLabCut logo) and images are copyrighted; please do not take or use these images without written permission.

SuperAnimal models are provided for research use only (non-commercial use).

## Major Versions:

For all versions, please see here.

VERSION 3.0: A whole new experience with PyTorch🔥. While the high-level API remains the same, the backend and developer friendliness have greatly improved, along with performance gains!

VERSION 2.3: Model Zoo SuperAnimals, and a whole new GUI experience.

VERSION 2.2: Multi-animal pose estimation, identification, and tracking with DeepLabCut is supported (as well as single-animal projects).

VERSION 2.0-2.1: This is the Python package of DeepLabCut that was originally released in Oct 2018 with our Nature Protocols paper (preprint here). This package includes graphical user interfaces to label your data and take you from dataset creation to automatic behavioral analysis. It also introduces an active learning framework to efficiently use DeepLabCut on large experimental projects, and data augmentation tools that improve network performance, especially in challenging cases.

VERSION 1.0: The initial, Nature Neuroscience version of DeepLabCut can be found in the history of git, or here: DeepLabCut/DeepLabCut

## News (and in the news):

:purple_heart: We released a major update, moving from 2.x → 3.x with the backend change to PyTorch.

:purple_heart: The DeepLabCut Model Zoo launches SuperAnimals, see more here.

:purple_heart: DeepLabCut supports multi-animal pose estimation! maDLC is out of beta/rc mode and the beta is deprecated; thanks to the testers out there for feedback! Your labeled data will be backwards compatible, but not all other steps will be. Please see the 2.2+ releases for what's new and how to install it, our new paper, Lauer et al 2022, and the new docs on how to use it!

:purple_heart: We support multi-animal re-identification, see Lauer et al 2022.

:purple_heart: We have a real-time package available! http://DLClive.deeplabcut.org
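A minimal sketch of the real-time API from the separate deeplabcut-live package (the exported-model path below is a placeholder; see the DLC-Live docs for camera integration):

```python
import numpy as np
from dlclive import DLCLive, Processor

# The exported-model path below is a placeholder.
dlc_proc = Processor()
dlc_live = DLCLive("/models/exported_dlc_model", processor=dlc_proc)

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
dlc_live.init_inference(frame)   # initialize the network on the first frame
pose = dlc_live.get_pose(frame)  # keypoint array for this frame
print(pose)
```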