Yihong Cao1,2*,
Jiaming Zhang3,4*,
Xu Zheng5,6,
Hao Shi7,
Kunyu Peng2,
Hang Liu1,
Kailun Yang1†,
Hui Zhang1†
1Hunan University,
2Hunan Normal University,
3Karlsruhe Institute of Technology,
4ETH Zurich,
5HKUST(GZ),
6INSAIT, Sofia University “St. Kliment Ohridski”,
7Zhejiang University
Panoramic image processing is essential for omni-context perception, yet it faces constraints such as distortions, perspective occlusions, and limited annotations. Previous unsupervised domain adaptation methods transfer knowledge from labeled pinhole data to unlabeled panoramic images, but they require access to the source pinhole data. To address these constraints, we introduce a more practical task, i.e., Source-Free Occlusion-Aware Seamless Segmentation (SFOASS), and propose its first solution, called UNconstrained Learning Omni-Context Knowledge (UNLOCK). Specifically, UNLOCK includes two key modules: Omni Pseudo-Labeling Learning and Amodal-Driven Context Learning. While adapting without source data or target labels, this framework enhances models to achieve segmentation with 360° viewpoint coverage and occlusion-aware reasoning. Furthermore, we benchmark the proposed SFOASS task under both real-to-real and synthetic-to-real adaptation settings. Experimental results show that our source-free method achieves performance comparable to source-dependent methods, yielding state-of-the-art scores of 10.9 in mAAP and 11.6 in mAP, along with an absolute improvement of +4.3 in mAPQ over the source-only method.
The UNLOCK framework solves the Source-Free Occlusion-Aware Seamless Segmentation (SFOASS) task, enabling segmentation with 360° viewpoint coverage and occlusion-aware reasoning while adapting without requiring source data or target labels.
This work addresses Source-Free Occlusion-Aware Seamless Segmentation (SFOASS) and evaluates the proposed method under two domain adaptation settings. In both settings, the source domain is used as released, and we convert the target dataset (BlendPASS) to align with the source label space (a minimal remapping sketch follows the settings below).
**Real-to-Real**
- Source: KITTI-360 APS — a real-world amodal panoptic dataset.
- Target: BlendPASS — a real-world 360° street-view panoptic segmentation dataset.

**Synthetic-to-Real**
- Source: AmodalSynthDrive — a synthetic dataset for amodal panoptic segmentation in driving scenes.
- Target: BlendPASS — same as above.
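Conceptually, the label-space alignment is a per-pixel ID remap onto the source domain's 7 classes. The sketch below is a minimal, hypothetical illustration; the placeholder ID pairs and the filename are assumptions, and the authoritative mapping is the one baked into the released aligned annotations.

```python
import numpy as np
from PIL import Image

# Lookup table projecting BlendPASS semantic IDs onto a 7-class
# source label space; 255 marks ignored/void pixels.
ID_MAP = np.full(256, 255, dtype=np.uint8)
for blendpass_id, aligned_id in {26: 0, 24: 1}.items():  # placeholder pairs only
    ID_MAP[blendpass_id] = aligned_id

# Remap one annotation image (hypothetical filename).
label = np.array(Image.open("blendpass_semantic.png"))
aligned = ID_MAP[label]  # per-pixel remap via fancy indexing
```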
We provide two versions of BlendPASS, each aligned to the respective source domain’s 7-class label space. Additionally, we release the preprocessed training labels for both source datasets used in our experiments.
| Dataset | Description | Download Link |
|---|---|---|
| BlendPASS (APS-aligned) | Target dataset aligned to KITTI-360 APS | Google Drive |
| BlendPASS (ASD-aligned) | Target dataset aligned to AmodalSynthDrive | Google Drive |
| KITTI-360 APS Labels | Preprocessed amodal panoptic training labels | Google Drive |
| AmodalSynthDrive Labels | Preprocessed amodal panoptic training labels | Google Drive |
1. Run `Generate_pseudolabels_from_sourcemodel.py` to generate the pseudo-label results in NumPy format.
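After this step, it can help to sanity-check one of the generated files. The path and array layout below are assumptions for illustration, not guaranteed by the script:

```python
import numpy as np

# Inspect one generated pseudo-label file (hypothetical path).
pred = np.load("pseudolabels/frame_0001.npy", allow_pickle=True)
arr = np.asarray(pred)
print(arr.dtype, arr.shape)
```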
2. Run the following conversion scripts:
   - `Save_OPLL_Instance_Level.py` (Line 148)
   - `Save_OPLL_Semantic.py` (Line 132)
   - `Save_ADCL_Pool.py` (Line 129)
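If convenient, the three conversion scripts can be driven in order from one helper. This sketch assumes each script runs standalone once its in-file settings (at the line numbers above) have been adjusted:

```python
import subprocess
import sys

# Run the three conversion scripts sequentially, stopping on the first failure.
for script in ("Save_OPLL_Instance_Level.py",
               "Save_OPLL_Semantic.py",
               "Save_ADCL_Pool.py"):
    subprocess.run([sys.executable, script], check=True)
```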
3. Run `Training_Mix_targetonly_unmaskformer.py` (Line 34).
If you have any suggestions or find our work helpful, feel free to contact us:
Email: [email protected]
If you find our work useful, please consider citing it:
@InProceedings{Cao_2025_ICCV,
author = {Cao, Yihong and Zhang, Jiaming and Zheng, Xu and Shi, Hao and Peng, Kunyu and Liu, Hang and Yang, Kailun and Zhang, Hui},
title = {Unlocking Constraints: Source-Free Occlusion-Aware Seamless Segmentation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2025},
pages = {8961-8972}
}