evaluation refactor #2679
Conversation
MMathisLab
left a comment
lgtm, although I did not test
MMathisLab
left a comment
but see suggested changes to docstrings
MMathisLab
left a comment
maybe in main docs we need to add information about these new metrics as well @n-poulsen
Evaluation refactor
Improvements to the evaluation code, as well as new tests to ensure that mAP scores match the pycocotools implementation.
Change list:
- `deeplabcut/core/metrics` folder (as metrics are computed with `numpy`)
- `compute_detection_rmse` to compute "detection" RMSE, matching the DeepLabCut 2.X implementation
- Fixes `ValueError: matrix contains invalid numeric entries` (#2631)
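The new detection RMSE metric could be sketched roughly as below. This is a hypothetical illustration, not the actual DeepLabCut implementation: the function name `compute_detection_rmse` is taken from the change list, but its real signature, array layout, and NaN handling may differ. The sketch assumes keypoints shaped `(frames, keypoints, 2)` with `NaN` marking missing labels, which are masked out so they cannot trigger errors such as the `ValueError` referenced above.

```python
import numpy as np


def compute_detection_rmse(
    predictions: np.ndarray, ground_truth: np.ndarray
) -> float:
    """Hypothetical sketch of a detection RMSE over matched keypoints.

    Both arrays have shape (num_frames, num_keypoints, 2). Entries that
    are NaN in either array (unlabeled bodyparts) are excluded from the
    average rather than propagated into the result.
    """
    diff = predictions - ground_truth
    # Squared Euclidean distance per (frame, keypoint); NaN where unlabeled.
    sq_dist = np.sum(diff**2, axis=-1)
    valid = ~np.isnan(sq_dist)
    if not valid.any():
        return float("nan")
    return float(np.sqrt(np.mean(sq_dist[valid])))
```

Masking invalid entries before averaging (instead of calling `np.mean` on the raw array) keeps a single unlabeled keypoint from turning the whole score into `NaN`.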