
Conversation

@n-poulsen (Contributor) commented Apr 30, 2024

During the computation of benchmark scores, the train/test images are loaded from the assemblies file (the predictions made by the user) using the train/test indices in the dataset documentation file. This isn't robust, as the order of images passed by the user might differ from the order in the ground truth file.

This PR updates the code to use the ground truth training dataset file to obtain the paths of the train/test images to use for evaluation, and to look those images up in the assemblies file. If predictions are missing for any test images, a warning is raised and the evaluation proceeds as if no predictions were made for those images.

The same issue occurred when computing OKS in DeepLabCut: missing predictions could shift the indices. This has been fixed in deeplabcut/pose_estimation_tensorflow/lib/crossvalutils.py.
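
For illustration, here is a minimal sketch of the path-based matching described above. The function and variable names (`align_predictions`, `ground_truth_paths`, `assemblies`) are hypothetical, not the actual DeepLabCut API; the real implementation lives in the benchmark evaluation code and in crossvalutils.py.

```python
import warnings


def align_predictions(ground_truth_paths, assemblies):
    """Match predicted assemblies to ground truth images by path, not index.

    Hypothetical sketch: ``ground_truth_paths`` is the ordered list of image
    paths from the ground truth dataset file, and ``assemblies`` maps image
    paths to the user's predicted assemblies.
    """
    aligned = []
    for image_path in ground_truth_paths:
        prediction = assemblies.get(image_path)
        if prediction is None:
            # Missing prediction: warn and score this image as if no
            # predictions were made, rather than shifting later indices.
            warnings.warn(f"No predictions found for image {image_path}")
        aligned.append(prediction)
    return aligned
```

Because the lookup is keyed on the image path rather than a positional index, a missing or reordered prediction affects only its own image instead of silently misaligning every subsequent one.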

@n-poulsen n-poulsen requested a review from jeylau April 30, 2024 14:32
@n-poulsen (Contributor, Author) commented:

Addresses #2433

@n-poulsen n-poulsen requested a review from MMathisLab April 30, 2024 14:34
@MMathisLab (Member) left a comment:


Lgtm thanks!

@MMathisLab MMathisLab merged commit f848a84 into main Apr 30, 2024
@MMathisLab MMathisLab deleted the niels/evaluation branch June 5, 2024 09:25