Description
Is there an existing issue for this?
- I have searched the existing issues
Bug description
I am using the DLC 3.0 PyTorch version on both Windows 11 and WSL2 Ubuntu 22.04, and both hit the same issue. After finishing model evaluation, I analyzed a video by running scorername = deeplabcut.analyze_videos(config_path, videofile_path, videotype='.mp4'). The Detector and Pose Prediction steps both completed to 100%. However, at the step Loading From None\test_multiDLC_Resnet50_test_multiJul21shuffle1_detector_250_snapshot_200.h5, I got a list of errors. The first is:
FileNotFoundError Traceback (most recent call last)
File ~\.conda\envs\deeplabcut3\Lib\site-packages\deeplabcut\utils\auxfun_multianimal.py:217, in LoadFullMultiAnimalData(dataname)
216 try:
--> 217 with open(data_file, "rb") as handle:
218 data = pickle.load(handle)
FileNotFoundError: [Errno 2] No such file or directory: 'None\\test_multiDLC_Resnet50_test_multiJul21shuffle1_detector_250_snapshot_200_full.pickle'
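For context, the cascade from this FileNotFoundError into the dbm error at the end of the log can be reproduced with a minimal stdlib sketch of the load-then-fallback pattern quoted in the traceback (the function body below is copied from the traceback; the file path is a deliberately nonexistent placeholder):

```python
import pickle
import shelve

def load_full_data(data_file):
    """Minimal sketch of the fallback in auxfun_multianimal.LoadFullMultiAnimalData
    as quoted in the traceback: a missing "_full.pickle" raises FileNotFoundError,
    the except clause then tries shelve.open(..., flag="r"), and dbm reports
    "db file doesn't exist" because the path never existed in the first place."""
    try:
        with open(data_file, "rb") as handle:
            return pickle.load(handle)
    except (pickle.UnpicklingError, FileNotFoundError):
        return shelve.open(data_file, flag="r")

# Placeholder path standing in for the bad "None\..." path from the log.
try:
    load_full_data("nonexistent_full.pickle")
except Exception as err:
    # dbm raises its own error class here, masking the original FileNotFoundError
    print(type(err).__name__)
```

So the final "db file doesn't exist" message is a symptom: the real problem is that the "_full.pickle" file was never found at the (wrong) "None\..." path.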
In addition, although model training engaged the GPU and was fast, video analysis still ran on the CPU.
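For what it's worth, the "None\" prefix in the failing path looks like str(None) leaking into a path join: the traceback shows analyze_videos forwarding destfolder=str(destfolder) into convert_detections2tracklets, and if destfolder was never set, str(None) is the literal string "None". A quick stdlib illustration of that failure mode:

```python
from pathlib import Path

destfolder = None  # analyze_videos was called without a destfolder

# str(None) is the literal string "None", so any downstream path join
# produces a folder that does not exist:
data_filename = (
    Path(str(destfolder))
    / "test_multiDLC_Resnet50_test_multiJul21shuffle1_detector_250_snapshot_200.h5"
)
print(data_filename)  # "None\..." on Windows, "None/..." on POSIX
```

This matches the log exactly: the .h5 and _full.pickle files were saved under C:\Users\charl\Downloads, but loading was attempted under the nonexistent folder "None".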
Operating System
Windows 11 and WSL2 Ubuntu 22.04
DeepLabCut version
DLC 3.0 (PyTorch)
DeepLabCut mode
single animal
Device type
GPU: GeForce GTX 1070 Max-Q
Steps To Reproduce
Here are the Python commands I ran:
deeplabcut.create_multianimaltraining_dataset(config_path, net_type="top_down_resnet_50")
deeplabcut.train_network(config_path, allow_growth=True, displayiters=1000, saveiters=500, maxiters=30000)
deeplabcut.evaluate_network(config_path, plotting=True)
videofile_path = 'C:/Users/charl/Downloads/test_multi.mp4'
scorername = deeplabcut.analyze_videos(config_path, videofile_path, videotype='.mp4')
The Detector and Pose Prediction steps finish; the error occurs at the step Loading From None\test_multiDLC_Resnet50_test_multiJul21shuffle1_detector_250_snapshot_200.h5.
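A possible workaround while this is unresolved, assuming the "None" prefix really does come from an unset destfolder, is to pass destfolder explicitly to analyze_videos (e.g. destfolder='C:/Users/charl/Downloads', the folder where the results were actually saved). The guard that the library could apply can be sketched with a hypothetical stdlib-only helper (resolve_destfolder is not a DeepLabCut function):

```python
from pathlib import Path

def resolve_destfolder(destfolder, video_path):
    """Hypothetical helper: fall back to the video's own directory when
    destfolder is None (or the literal string "None"), instead of letting
    str(None) leak into downstream path joins."""
    if destfolder is None or str(destfolder) == "None":
        return Path(video_path).parent
    return Path(destfolder)

print(resolve_destfolder(None, "C:/Users/charl/Downloads/test_multi.mp4"))
```

With such a guard, convert_detections2tracklets would look for the .h5 and _full.pickle next to the video, which is where analyze_videos reports saving them.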
Relevant log output
Running Detector
100%|████████████████████████████████████████████████████████████████████████████| 24434/24434 [24:04<00:00, 16.92it/s]
Running Pose Prediction
100%|████████████████████████████████████████████████████████████████████████████| 24434/24434 [32:13<00:00, 12.64it/s]
Saving results in C:\Users\charl\Downloads\test_multiDLC_Resnet50_test_multiJul21shuffle1_detector_250_snapshot_200.h5 and C:\Users\charl\Downloads\test_multiDLC_Resnet50_test_multiJul21shuffle1_detector_250_snapshot_200_full.pickle
Processing... C:\Users\charl\Downloads\test_multi.mp4
Loading From None\test_multiDLC_Resnet50_test_multiJul21shuffle1_detector_250_snapshot_200.h5
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
File ~\.conda\envs\deeplabcut3\Lib\site-packages\deeplabcut\utils\auxfun_multianimal.py:217, in LoadFullMultiAnimalData(dataname)
216 try:
--> 217 with open(data_file, "rb") as handle:
218 data = pickle.load(handle)
FileNotFoundError: [Errno 2] No such file or directory: 'None\\test_multiDLC_Resnet50_test_multiJul21shuffle1_detector_250_snapshot_200_full.pickle'
During handling of the above exception, another exception occurred:
error Traceback (most recent call last)
Cell In[14], line 1
----> 1 scorername = deeplabcut.analyze_videos(config_path, videofile_path, videotype='.mp4')
File ~\.conda\envs\deeplabcut3\Lib\site-packages\deeplabcut\compat.py:856, in analyze_videos(config, videos, videotype, shuffle, trainingsetindex, gputouse, save_as_csv, in_random_order, destfolder, batchsize, cropping, TFGPUinference, dynamic, modelprefix, robust_nframes, allow_growth, use_shelve, auto_track, n_tracks, calibrate, identity_only, use_openvino, engine, **torch_kwargs)
851 if use_shelve:
852 raise NotImplementedError(
853 f"The 'use_shelve' option is not yet implemented with {engine}"
854 )
--> 856 return analyze_videos(
857 config,
858 videos=videos,
859 videotype=videotype,
860 shuffle=shuffle,
861 trainingsetindex=trainingsetindex,
862 save_as_csv=save_as_csv,
863 destfolder=destfolder,
864 batchsize=batchsize,
865 modelprefix=modelprefix,
866 auto_track=auto_track,
867 identity_only=identity_only,
868 overwrite=False,
869 **torch_kwargs,
870 )
872 raise NotImplementedError(f"This function is not implemented for {engine}")
File ~\.conda\envs\deeplabcut3\Lib\site-packages\deeplabcut\pose_estimation_pytorch\apis\analyze_videos.py:375, in analyze_videos(config, videos, videotype, shuffle, trainingsetindex, save_as_csv, snapshot_index, detector_snapshot_index, device, destfolder, batchsize, modelprefix, transform, auto_track, identity_only, overwrite)
366 _save_assemblies(
367 output_path,
368 output_prefix,
(...)
372 with_identity,
373 )
374 if auto_track:
--> 375 convert_detections2tracklets(
376 config=config,
377 videos=str(video),
378 videotype=videotype,
379 shuffle=shuffle,
380 trainingsetindex=trainingsetindex,
381 overwrite=False,
382 identity_only=identity_only,
383 destfolder=str(destfolder),
384 )
385 stitch_tracklets(
386 config,
387 [str(video)],
(...)
391 destfolder=str(destfolder),
392 )
394 print(
395 "The videos are analyzed. Now your research can truly start!\n"
396 "You can create labeled videos with 'create_labeled_video'.\n"
(...)
399 "few representative outlier frames.\n"
400 )
File ~\.conda\envs\deeplabcut3\Lib\site-packages\deeplabcut\pose_estimation_pytorch\apis\convert_detections_to_tracklets.py:123, in convert_detections2tracklets(config, videos, videotype, shuffle, trainingsetindex, overwrite, destfolder, ignore_bodyparts, inferencecfg, modelprefix, greedy, calibrate, window_size, identity_only, track_method)
121 data_filename = output_path / (data_prefix + ".h5")
122 print(f"Loading From {data_filename}")
--> 123 data, metadata = auxfun_multianimal.LoadFullMultiAnimalData(str(data_filename))
124 if track_method == "ellipse":
125 method = "el"
File ~\.conda\envs\deeplabcut3\Lib\site-packages\deeplabcut\utils\auxfun_multianimal.py:220, in LoadFullMultiAnimalData(dataname)
218 data = pickle.load(handle)
219 except (pickle.UnpicklingError, FileNotFoundError):
--> 220 data = shelve.open(data_file, flag="r")
221 with open(data_file.replace("_full.", "_meta."), "rb") as handle:
222 metadata = pickle.load(handle)
File ~\.conda\envs\deeplabcut3\Lib\shelve.py:243, in open(filename, flag, protocol, writeback)
230 def open(filename, flag='c', protocol=None, writeback=False):
231 """Open a persistent dictionary for reading and writing.
232
233 The filename parameter is the base filename for the underlying
(...)
240 See the module's __doc__ string for an overview of the interface.
241 """
--> 243 return DbfilenameShelf(filename, flag, protocol, writeback)
File ~\.conda\envs\deeplabcut3\Lib\shelve.py:227, in DbfilenameShelf.__init__(self, filename, flag, protocol, writeback)
225 def __init__(self, filename, flag='c', protocol=None, writeback=False):
226 import dbm
--> 227 Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
File ~\.conda\envs\deeplabcut3\Lib\dbm\__init__.py:85, in open(file, flag, mode)
83 mod = _defaultmod
84 else:
---> 85 raise error[0]("db file doesn't exist; "
86 "use 'c' or 'n' flag to create a new db")
87 elif result == "":
88 # db type cannot be determined
89 raise error[0]("db type could not be determined")
error: db file doesn't exist; use 'c' or 'n' flag to create a new db
Anything else?
No response
Code of Conduct
- I agree to follow this project's Code of Conduct