GPU not utilized by top down detector #2840

@nattse

Description

Is there an existing issue for this?

  • I have searched the existing issues

Bug description

In pytorch_dlc, analyzing videos using a top-down model will not utilize the GPU during the detection phase.

In deeplabcut/pose_estimation_pytorch/apis/analyze_videos.py, the specified device/GPU is stored in model_cfg, which is passed to the detector here:

```python
detector_runner = utils.get_detector_inference_runner(
    model_config=model_cfg,
    snapshot_path=detector_snapshot.path,
    max_individuals=max_num_animals,
    batch_size=detector_batch_size,
)
```

However, utils.get_detector_inference_runner declares device: str | None = None by default, and no code ever replaces this default with the device from model_config.

In utils.get_inference_runners, this is handled with:

```python
if device is None:
    device = resolve_device(model_config)
```

The same check should be added to utils.get_detector_inference_runner; otherwise the detection phase falls back to the CPU and runs much more slowly.

Operating System

Windows

DeepLabCut version

3.0.0rc6

DeepLabCut mode

multi animal

Device type

GPU

Steps To Reproduce

No response

Relevant log output

Anything else?

No response