Open World Object Detection: A Survey

Welcome to the code archive for our review paper: Open World Object Detection: A Survey

Abstract:

Exploring new knowledge is a fundamental human ability that can be mirrored in the development of deep neural networks, especially in the field of object detection. Open world object detection (OWOD) is an emerging area of research that adapts this principle to explore new knowledge. It focuses on recognizing and learning from objects absent from initial training sets, thereby incrementally expanding its knowledge base when new class labels are introduced.

We summarize most existing Open World Object Detection (OWOD) methods in the literature and archive their code in this repository, covering essential aspects including benchmark datasets, source code, evaluation results, and a taxonomy of existing methods.

Taxonomy of OWOD methods

Pseudo-labeling-based methods

Pseudo-labeling-based methods adopt the pseudo-labeling technique to select unknown objects during training. They usually define an objectness score to measure whether a proposed region contains an object. Proposals with the top-k objectness scores that do not match any known category are pseudo-labeled as unknown objects; a minimal sketch of this selection step is given after the paper list below.

Towards Open World Object Detection

OW-DETR: Open-World Detection Transformer

Fast OWDETR: transformer for open world object detection

Open World DETR: Transformer based Open World Object Detection

CAT: LoCalization and IdentificAtion Cascade Detection Transformer for Open-World Object Detection
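
As a rough illustration of the shared selection step (not the exact procedure of any single paper above), the following sketch assumes each proposal already carries a self-defined objectness score and an IoU-based match flag against known ground-truth boxes; the function name and `top_k` value are illustrative.

```python
import torch

def select_unknown_pseudo_labels(boxes, objectness, matched_to_known, top_k=5):
    """Pseudo-label unmatched, high-objectness proposals as unknown objects.

    boxes:            (N, 4) proposal boxes
    objectness:       (N,)   self-defined objectness score per proposal
    matched_to_known: (N,)   bool, True if the proposal matches a known GT box
    """
    # Proposals already matched to a known category are excluded from selection.
    candidate_scores = objectness.clone()
    candidate_scores[matched_to_known] = float("-inf")

    # Keep the top-k most object-like unmatched proposals as unknown pseudo-labels.
    k = min(top_k, int((~matched_to_known).sum()))
    if k == 0:
        return boxes.new_zeros((0, 4))
    _, idx = candidate_scores.topk(k)
    return boxes[idx]
```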

Class-agnostic methods

Class-agnostic methods treat known and unknown objects alike as foreground. By separating the detection of objects from the identification of each instance, these methods use a class-agnostic object proposer to measure the objectness of proposed regions. Because the proposer learns objectness rather than a classifier, it introduces no bias toward known categories; a sketch of this two-head design is given after the paper list below.

Two-branch Objectness-centric Open World Detection

PROB: Probabilistic Objectness for Open World Object Detection

Addressing the Challenges of Open-World Object Detection

Learning Open-World Object Proposals Without Learning to Classify

Random Boxes Are Open-world Object Detectors
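
A minimal sketch of the idea, assuming per-proposal features from a shared backbone: a class-agnostic objectness head scores whether a region is an object at all, independently of a classifier over known categories, so a confident foreground region that no known class claims can be kept as a potential unknown. Module and threshold names are illustrative, not taken from any of the papers above.

```python
import torch
import torch.nn as nn

class ClassAgnosticHeads(nn.Module):
    """Objectness (foreground vs. background) learned separately from known-class identification."""

    def __init__(self, feat_dim, num_known_classes):
        super().__init__()
        self.objectness = nn.Linear(feat_dim, 1)                   # class-agnostic: object or not
        self.classifier = nn.Linear(feat_dim, num_known_classes)   # known categories only

    def forward(self, proposal_feats):
        obj_score = torch.sigmoid(self.objectness(proposal_feats)).squeeze(-1)
        cls_logits = self.classifier(proposal_feats)
        return obj_score, cls_logits

def assign_labels(obj_score, cls_logits, obj_thresh=0.5, cls_thresh=0.5):
    """Foreground proposals that no known class claims are kept as potential unknowns."""
    known_conf, known_idx = cls_logits.softmax(-1).max(-1)
    is_object = obj_score > obj_thresh
    is_known = is_object & (known_conf > cls_thresh)
    is_unknown = is_object & ~is_known
    return known_idx, is_known, is_unknown
```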

Metric-learning methods

Metric-learning OWOD methods generally treat the classification of unknown instances as a metric-learning problem. By projecting instance features into an embedding space, a range of metric-learning techniques can be used to discriminate among known classes, unknown classes, and background. Most of these methods share a common strategy for extracting potential unknown instances and focus on distinguishing known objects, unknown objects, and background; some go further and separate different unknown classes without ground-truth labels, which is closer to a true open-world setting. A sketch of prototype-based assignment is given after the paper list below.

Revisiting Open World Object Detection

Open-World Object Detection via Discriminative Class Prototype Learning

UC-OWOD: Unknown-Classified Open World Object Detection
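
A minimal sketch of prototype-based assignment in the embedding space, in the spirit of these methods: each known class keeps a prototype vector, an instance is assigned to its nearest prototype, and instances far from every known prototype are treated as unknown. The distance threshold and names are illustrative.

```python
import torch

def classify_by_prototypes(embeddings, prototypes, unknown_dist_thresh=1.0):
    """Assign instances to known classes by nearest prototype, or to unknown.

    embeddings: (N, D) instance features projected into the embedding space
    prototypes: (C, D) one learned prototype per known class
    Returns class indices in [0, C-1], or -1 for unknown.
    """
    # Euclidean distance from every instance to every known-class prototype.
    dists = torch.cdist(embeddings, prototypes)    # (N, C)
    min_dist, nearest = dists.min(dim=1)

    labels = nearest.clone()
    labels[min_dist > unknown_dist_thresh] = -1    # far from all known prototypes -> unknown
    return labels
```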

Other methods

Apart from the categories above, there are other OWOD methods that do not fit into any of them.

Class-agnostic Object Detection with Multi-modal Transformer

Unknown-Aware Object Detection: Learning What You Don't Know from Videos in the Wild

Detecting the open-world objects with the help of the Brain

Exploring Orthogonality in Open World Object Detection (OrthogonalDet)

Annealing-based Label-Transfer Learning for Open World Object Detection (ALLOW)

Exploiting Hierarchical Structure Learning with Hyperbolic Distance Enhances Open World Object Detection (Hyp-OW)

Recalling Unknowns Without Losing Precision: An Effective Solution to Large Model-Guided Open World Object Detection (SGROD)

Enhancing Open-World Object Detection with Knowledge Transfer and Class-Awareness Neutralization (KTCN)

Unsupervised Recognition of Unknown Objects for Open-World Object Detection (MEPU)

Enhancing Open-World Object Detection with Foundation Models and Dynamic Learning (FMDL)

Unified Open World and Open Vocabulary Object Detection (OW-OVD)

Efficient Universal Open-World Object Detection (YOLO-UniOW)

Dataset splits & Results

Most existing OWOD methods are evaluated on two datasets: MS-COCO and PASCAL VOC. These datasets are divided into splits following two strategies.

First, for the original OWOD task, ORE integrates the MS-COCO dataset with the PASCAL VOC dataset to provide more samples; this is called the OWOD split. Specifically, all classes and their corresponding samples are grouped into a set of non-overlapping tasks $\{T_1, \cdots, T_t\}$. Classes from the PASCAL VOC dataset form task $T_1$, and the remaining classes are grouped into subsequent tasks by semantic drift.
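
As an illustration of how the OWOD split organizes classes (in the ORE setup, task $T_1$ holds the 20 PASCAL VOC classes and the remaining MS-COCO classes are introduced 20 per task in $T_2$–$T_4$), here is a minimal sketch of the incremental protocol; the class lists are abbreviated and the dictionary layout is illustrative:

```python
# OWOD split: four non-overlapping tasks of 20 classes each (80 classes in total).
OWOD_SPLIT = {
    "t1": ["aeroplane", "bicycle", "bird", "..."],    # the 20 PASCAL VOC classes
    "t2": ["truck", "traffic light", "..."],          # remaining MS-COCO classes,
    "t3": ["frisbee", "skis", "..."],                 # grouped 20 per task by semantics
    "t4": ["laptop", "book", "..."],
}

def known_classes_at(task_id):
    """At task T_t, every class introduced in T_1..T_t is known; the rest is unknown."""
    tasks = ["t1", "t2", "t3", "t4"]
    return [c for t in tasks[: tasks.index(task_id) + 1] for c in OWOD_SPLIT[t]]
```

For example, `known_classes_at("t2")` returns the classes seen up to Task 2; everything else is still unknown at that point.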

Most existing state-of-the-art methods use the OWOD split as their evaluation protocol; their results are summarized below:

In the tables below, U-Recall denotes the recall of unknown objects, and mAP is reported separately for previously known classes (Prev), currently known classes (Current), and their union (Both). A dash marks an entry that was not reported.

| Method | T1 U-Recall | T1 mAP Current | T2 U-Recall | T2 mAP Prev | T2 mAP Current | T2 mAP Both | T3 U-Recall | T3 mAP Prev | T3 mAP Current | T3 mAP Both | T4 mAP Prev | T4 mAP Current | T4 mAP Both |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ORE | 4.9 | 56.0 | 2.9 | 52.7 | 26.0 | 39.4 | 3.9 | 38.2 | 12.7 | 29.7 | 29.6 | 12.4 | 25.3 |
| UC-OWOD | – | 50.7 | – | 33.1 | 30.5 | 31.8 | – | 28.8 | 16.3 | 24.6 | 25.6 | 12.9 | 23.2 |
| OW-DETR | 7.5 | 59.2 | 6.2 | 53.6 | 33.5 | 42.9 | 5.7 | 38.3 | 15.8 | 30.8 | 31.4 | 17.1 | 27.8 |
| Fast-OWDETR | 9.2 | 56.6 | 8.8 | 51.3 | 28.6 | 39.4 | 7.8 | 39.2 | 15.7 | 32.2 | 28.2 | 11.4 | 25.0 |
| OCPL | 8.3 | 56.6 | 7.7 | 50.7 | 27.5 | 39.1 | 11.9 | 38.6 | 14.7 | 30.7 | 30.8 | 14.4 | 26.7 |
| RE-OWOD | 9.1 | 59.7 | 9.9 | 54.1 | 37.3 | 45.6 | 11.4 | 43.1 | 24.6 | 37.6 | 38.0 | 28.7 | 35.7 |
| RandBox | 10.6 | 61.8 | 6.3 | – | – | 45.3 | 7.8 | – | – | 39.4 | – | – | 35.4 |
| 2B-OCD | 12.1 | 56.4 | 9.4 | 51.6 | 25.3 | 38.5 | 11.7 | 37.2 | 13.2 | 29.2 | 30.0 | 13.3 | 25.8 |
| ALLOW | 13.6 | 59.3 | 10.0 | 53.2 | 34.0 | 45.6 | 14.3 | 42.6 | 26.7 | 38.0 | 33.5 | 21.8 | 30.6 |
| PROB | 19.4 | 59.5 | 17.4 | 55.7 | 32.2 | 44.0 | 19.6 | 43.0 | 22.2 | 36.0 | 35.7 | 18.9 | 31.5 |
| Open World DETR | 21.0 | 59.9 | 15.7 | 51.8 | 36.4 | 44.1 | 17.4 | 38.9 | 24.7 | 34.2 | 32.0 | 19.7 | 29.0 |
| Hyp-OW | 23.5 | 59.4 | 20.6 | – | – | 44.0 | 26.3 | – | – | 36.8 | – | – | 33.6 |
| CAT | 23.7 | 60.0 | 19.1 | 55.5 | 32.7 | 44.1 | 24.4 | 42.8 | 18.7 | 34.8 | 34.4 | 16.6 | 29.9 |
| ORTH | 24.6 | 61.3 | 26.3 | 55.5 | 38.5 | 47.0 | 29.1 | 46.7 | 30.6 | 41.3 | 42.4 | 24.3 | 37.9 |
| MEPU-FS | 31.6 | 60.2 | 30.9 | 57.3 | 33.3 | 44.8 | 30.1 | 42.6 | 21.0 | 35.4 | 34.8 | 19.1 | 30.9 |
| SGROD | 34.3 | 59.8 | 32.6 | 56.0 | 32.3 | 44.9 | 32.7 | 42.8 | 22.4 | 36.0 | 35.5 | 18.5 | 31.2 |
| OW-RCNN | 37.7 | 63.0 | 39.9 | 48.8 | 41.7 | 45.2 | 43.0 | 45.2 | 31.7 | 40.7 | 40.3 | 28.8 | 37.4 |
| SKDF | 39.0 | 56.8 | 36.7 | 52.3 | 28.3 | 40.3 | 36.1 | 36.9 | 16.4 | 30.1 | 31.0 | 14.7 | 26.9 |
| KTCN | 41.5 | 60.2 | 38.6 | 55.8 | 36.3 | 46.0 | 39.7 | 43.5 | 22.1 | 36.4 | 35.1 | 16.2 | 30.4 |
| FMDL | 41.6 | 62.3 | 38.7 | 59.2 | 38.6 | 47.3 | 35.6 | 48.1 | 32.2 | 43.2 | 44.5 | 26.7 | 38.3 |
| MAVL | 50.1 | 64.0 | 49.5 | 61.6 | 30.8 | 46.2 | 50.9 | 43.8 | 22.7 | 36.8 | 36.2 | 20.6 | 32.3 |
| OW-OVD | 50.0 | 69.4 | 51.7 | 69.5 | 41.7 | 55.6 | 50.6 | 55.5 | 29.8 | 47.0 | 47.0 | 25.2 | 41.6 |
| YOLO-UniOW | 82.6 | 73.6 | 82.6 | 73.4 | 48.4 | 60.9 | 81.5 | 60.9 | 39.0 | 53.6 | 53.6 | 32.0 | 48.2 |
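
For reference, U-Recall in these tables is the recall of ground-truth unknown objects, usually aggregated over the whole evaluation set. A minimal sketch of how it can be computed for one image, assuming IoU-based matching (the 0.5 threshold and function name are illustrative, not the exact evaluation code of any method above):

```python
import torch
from torchvision.ops import box_iou

def unknown_recall(pred_unknown_boxes, gt_unknown_boxes, iou_thresh=0.5):
    """Fraction of ground-truth unknown boxes matched by at least one predicted unknown box."""
    if len(gt_unknown_boxes) == 0:
        return 1.0
    if len(pred_unknown_boxes) == 0:
        return 0.0
    iou = box_iou(gt_unknown_boxes, pred_unknown_boxes)   # (num_gt, num_pred)
    recalled = (iou.max(dim=1).values >= iou_thresh).float()
    return recalled.mean().item()
```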

More recently, OW-DETR proposed a new strategy that splits the categories across super-classes, called the MS-COCO split. Object classes are grouped into the same task by semantic meaning. For example, trucks and other vehicles, which fall into different tasks under the OWOD split, are grouped together under the super-class task comprising Animals, Person, and Vehicles. A minimal sketch of this grouping follows below.
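
The snippet below only illustrates the shape of the grouping; class lists are abbreviated and task membership beyond Task 1 is indicative only (the exact per-task assignment follows the OW-DETR paper).

```python
# MS-COCO split: each task is the union of whole semantic super-classes,
# regardless of which dataset a class originally came from.
SUPER_CLASSES = {
    "animals": ["bird", "cat", "dog", "horse", "..."],
    "person": ["person"],
    "vehicles": ["bicycle", "car", "truck", "bus", "..."],
    # further super-classes (sports, food, electronic, ...) fill tasks 2-4
}

# Task 1 of the MS-COCO split covers the Animals, Person, and Vehicles super-classes.
TASK_1 = [c for name in ("animals", "person", "vehicles") for c in SUPER_CLASSES[name]]
```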

Several methods have also reported evaluation results on the MS-COCO split; the results are shown below:

| Method | T1 U-Recall | T1 mAP Current | T2 U-Recall | T2 mAP Prev | T2 mAP Current | T2 mAP Both | T3 U-Recall | T3 mAP Prev | T3 mAP Current | T3 mAP Both | T4 mAP Prev | T4 mAP Current | T4 mAP Both |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ORE | 1.5 | 61.4 | 3.9 | 56.5 | 26.1 | 40.6 | 3.6 | 38.7 | 23.7 | 33.7 | 33.6 | 26.3 | 31.8 |
| OW-DETR | 5.7 | 71.5 | 6.2 | 62.8 | 27.5 | 43.8 | 6.9 | 45.2 | 24.9 | 38.5 | 38.2 | 28.1 | 33.1 |
| PROB | 19.4 | 59.5 | 17.4 | 55.7 | 32.2 | 44.0 | 19.6 | 43.0 | 22.2 | 36.0 | 35.7 | 18.9 | 31.5 |
| CAT | 24.0 | 74.2 | 23.0 | 67.6 | 35.5 | 50.7 | 24.6 | 51.2 | 32.6 | 45.0 | 45.4 | 35.1 | 42.8 |
| ORTH | 24.6 | 71.6 | 27.9 | 64.0 | 39.9 | 51.3 | 31.9 | 52.1 | 42.2 | 48.8 | 48.7 | 38.8 | 46.2 |
| Hyp-OW | 23.9 | 72.7 | 23.3 | – | – | 50.6 | 25.4 | – | – | 46.2 | – | – | 44.8 |
| OW-RCNN | 23.9 | 68.9 | 33.3 | 49.6 | 36.7 | 41.9 | 40.8 | 42.3 | 30.8 | 38.5 | 39.4 | 32.2 | 37.7 |
| MEPU-FS | 37.9 | 74.3 | 35.8 | 68.0 | 41.9 | 54.3 | 35.7 | 50.2 | 38.3 | 46.2 | 43.7 | 33.7 | 41.2 |
| SGROD | 48.0 | 73.2 | 48.9 | 64.7 | 36.7 | 50.0 | 47.7 | 47.4 | 32.4 | 42.4 | 42.5 | 32.6 | 40.0 |
| SKDF | 60.9 | 69.4 | 60.0 | 63.8 | 26.9 | 44.4 | 58.6 | 46.2 | 28.0 | 40.1 | 41.8 | 29.6 | 38.7 |
| OW-OVD | 76.2 | 78.6 | 79.8 | 78.5 | 61.5 | 69.6 | 78.4 | 69.6 | 55.1 | 64.7 | 64.8 | 56.3 | 62.7 |
| YOLO-UniOW | 84.5 | 74.4 | 83.4 | 74.4 | 56.9 | 65.2 | 83.0 | 65.2 | 52.2 | 61.0 | 61.0 | 52.7 | 58.9 |

Citation

If you find this work useful, please cite:

@article{li2024open,
  title={Open world object detection: a survey},
  author={Li, Yiming and Wang, Yi and Wang, Wenqian and Lin, Dan and Li, Bingbing and Yap, Kim-Hui},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2024},
  publisher={IEEE}
}
