In this guide, we'll demonstrate how to use Kangas to evaluate models from the Hugging Face Hub.
We'll use the OWL-ViT object detection model to make predictions on 500 examples from the validation split of the Fashionpedia 4 Categories dataset.
The dataset has 4 classes: `['accessories', 'clothing', 'bags', 'shoes']`.
For each image, we'll log bounding boxes and scores for the individual classes in the dataset. Learn more about annotating images with Kangas here.
```shell
pip install -r requirements.txt
python detect.py
kangas server
```

Let's take a look at some of the things we can do with Kangas.
Here we're going to filter for example images that contain accessories but where the model failed to detect any.
To do this, we'll run the following query in the filter section:
```
{"accessories"} != 0 and {"score_accessories"} == 0
```
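The filter above uses Kangas's own query syntax, but the underlying logic is simple. A minimal pure-Python sketch, using hypothetical per-image rows (the keys mirror the column names from the query; the values are made up):

```python
# Hypothetical rows: "accessories" is the ground-truth count of accessory
# boxes in the image, "score_accessories" the model's detection score.
rows = [
    {"id": 1, "accessories": 2, "score_accessories": 0.0},
    {"id": 2, "accessories": 0, "score_accessories": 0.0},
    {"id": 3, "accessories": 1, "score_accessories": 0.87},
]

# Equivalent of the Kangas filter: images with accessories the model missed.
missed = [r for r in rows if r["accessories"] != 0 and r["score_accessories"] == 0]

print([r["id"] for r in missed])
```

Only image 1 survives the filter: it contains accessories, yet the model produced no detection score for that class.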
object-detection-filtering.mov
Next, let's sort the images by their mAP score. This lets us find example images where our model is doing well and where it is struggling.
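Kangas handles the sorting in the UI; conceptually it is just an ordering over a per-image metric. A sketch with hypothetical mAP values:

```python
# Hypothetical per-image mAP scores (Kangas stores these as a column).
rows = [
    {"id": "img_a", "mAP": 0.91},
    {"id": "img_b", "mAP": 0.12},
    {"id": "img_c", "mAP": 0.55},
]

# Ascending order surfaces the hardest examples first;
# pass reverse=True to see the best-performing images instead.
hardest_first = sorted(rows, key=lambda r: r["mAP"])

print([r["id"] for r in hardest_first])
```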
object-detection-sorting.mov
It is likely that our model detects some objects well and misses others. We want to see whether the per-image mAP score is affected by the presence of a particular object class in that image.
object-detect-groupby.mov
Here we see that the average mAP score is much higher in images that contain shoes.
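The group-by above amounts to averaging the per-image mAP within each group. A minimal sketch of the same computation, with made-up numbers chosen only to illustrate the shape of the result:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows: per-image mAP plus the count of shoe annotations.
rows = [
    {"mAP": 0.9, "shoes": 2},
    {"mAP": 0.8, "shoes": 1},
    {"mAP": 0.3, "shoes": 0},
    {"mAP": 0.4, "shoes": 0},
]

# Group images by whether they contain shoes, then average mAP per group.
groups = defaultdict(list)
for r in rows:
    groups[r["shoes"] > 0].append(r["mAP"])
avg_map = {has_shoes: mean(scores) for has_shoes, scores in groups.items()}

print(avg_map)  # average mAP for images with vs. without shoes
```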
We have logged annotations and labels as metadata in our Images. Kangas lets you filter examples based on this metadata.
Let's take a look at images that contain both bags and shoes. Simply add the following line to the filter section:
```
"gt_bags" in {"Image"}.labels.keys() and "gt_shoes" in {"Image"}.labels.keys()
```
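In plain Python, this filter is a membership test on each image's label metadata. A sketch using hypothetical metadata dicts (the `gt_*` keys mirror the annotation names above; the box coordinates are invented):

```python
# Hypothetical metadata: "labels" maps annotation names to bounding boxes.
images = [
    {"id": 1, "labels": {"gt_bags": [[10, 20, 50, 60]], "gt_shoes": [[5, 5, 30, 30]]}},
    {"id": 2, "labels": {"gt_clothing": [[0, 0, 100, 100]]}},
    {"id": 3, "labels": {"gt_shoes": [[12, 14, 40, 44]]}},
]

# Keep only images whose labels include both bags and shoes.
both = [img["id"] for img in images
        if "gt_bags" in img["labels"] and "gt_shoes" in img["labels"]]

print(both)
```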