
Check benchmark submissions for completeness #2433

@stes

Description

Is your feature request related to a problem? Please describe.

The deeplabcut benchmark scripts currently do not check whether a submission is complete, e.g., whether predictions for all test images are returned, as noted by @n-poulsen.

Describe the solution you'd like

A good solution would be to add a few lines to this part of the evaluation code, something along the lines of

predictions = self.get_predictions(name)
self._validate_predictions(predictions)
...

def _validate_predictions(self, predictions):
  # check whether predictions contains images not present in the ground truth
  # check whether predictions is missing images that are in the ground truth
  # (other potential tests)
  pass
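A minimal sketch of what `_validate_predictions` could look like, using set comparison over image names. The `ground_truth` attribute and the dict-keyed-by-image layout are assumptions for illustration, not the actual deeplabcut benchmark API:

```python
class Benchmark:
    """Illustrative stand-in for the benchmark evaluation class (hypothetical)."""

    def __init__(self, ground_truth):
        # ground_truth: mapping from image name to annotations (assumed layout)
        self.ground_truth = ground_truth

    def _validate_predictions(self, predictions):
        """Raise ValueError if the predicted and ground-truth image sets differ."""
        predicted = set(predictions)
        expected = set(self.ground_truth)
        extra = predicted - expected
        missing = expected - predicted
        if extra:
            raise ValueError(f"Predictions for unknown images: {sorted(extra)}")
        if missing:
            raise ValueError(f"Missing predictions for images: {sorted(missing)}")


benchmark = Benchmark(ground_truth={"img0.png": None, "img1.png": None})

# A complete submission passes silently.
benchmark._validate_predictions({"img0.png": None, "img1.png": None})

# An incomplete submission raises a descriptive error.
try:
    benchmark._validate_predictions({"img0.png": None})
except ValueError as err:
    print(err)
```

Raising early with an explicit list of missing or unexpected images gives submitters immediate, actionable feedback instead of a silently wrong score.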
