Description
Is there an existing issue for this?
- I have searched the existing issues
Bug description
In pytorch_dlc, analyzing videos with a top-down model does not use the GPU during the detection phase.
In `deeplabcut/pose_estimation_pytorch/apis/analyze_videos.py`, the specified device/GPU is stored in `model_cfg`, which is passed to the detector here:

```python
detector_runner = utils.get_detector_inference_runner(
    model_config=model_cfg,
    snapshot_path=detector_snapshot.path,
    max_individuals=max_num_animals,
    batch_size=detector_batch_size,
)
```
However, `utils.get_detector_inference_runner` declares `device: str | None = None` as a default, and no code replaces this default with the device from `model_config`.
In `utils.get_inference_runners`, this is handled with:

```python
if device is None:
    device = resolve_device(model_config)
```

The same fallback should be added to `utils.get_detector_inference_runner`; otherwise the detection phase silently falls back to the CPU and runs very slowly.
Operating System
Windows
DeepLabCut version
'3.0.0rc6'
DeepLabCut mode
multi animal
Device type
GPU
Steps To Reproduce
No response
Relevant log output
Anything else?
No response
Code of Conduct
- I agree to follow this project's Code of Conduct