Unlocking Constraints: Source-Free Occlusion-Aware Seamless Segmentation

ICCV 2025

Yihong Cao1,2*, Jiaming Zhang3,4*, Xu Zheng5,6, Hao Shi7, Kunyu Peng2, Hang Liu1, Kailun Yang1†, Hui Zhang1†
1Hunan University, 2Hunan Normal University, 3Karlsruhe Institute of Technology, 4ETH Zurich, 5HKUST(GZ), 6INSAIT, Sofia University “St. Kliment Ohridski”, 7Zhejiang University

Abstract

Panoramic image processing is essential for omni-context perception, yet it faces constraints such as distortions, perspective occlusions, and limited annotations. Previous unsupervised domain adaptation methods transfer knowledge from labeled pinhole data to unlabeled panoramic images, but they require access to the source pinhole data. To address these constraints, we introduce a more practical task, i.e., Source-Free Occlusion-Aware Seamless Segmentation (SFOASS), and propose its first solution, called UNconstrained Learning Omni-Context Knowledge (UNLOCK). Specifically, UNLOCK includes two key modules: Omni Pseudo-Labeling Learning and Amodal-Driven Context Learning. While adapting without relying on source data or target labels, this framework enhances models to achieve segmentation with 360° viewpoint coverage and occlusion-aware reasoning. Furthermore, we benchmark the proposed SFOASS task under both real-to-real and synthetic-to-real adaptation settings. Experimental results show that our source-free method achieves performance comparable to source-dependent methods, yielding state-of-the-art scores of 10.9 in mAAP and 11.6 in mAP, along with an absolute improvement of +4.3 in mAPQ over the source-only method.


The UNLOCK framework addresses Source-Free Occlusion-Aware Seamless Segmentation (SFOASS), enabling segmentation with 360° viewpoint coverage and occlusion-aware reasoning while adapting without access to source data or target labels.

📚 Datasets

This work addresses Source-Free Occlusion-Aware Seamless Segmentation (SFOASS) and evaluates the proposed method under two domain adaptation settings. In both cases, the source domains remain unchanged, and we convert the target dataset (BlendPASS) to align with the source label space.

1. Real-to-Real Adaptation

  - Source: KITTI-360 APS — a real-world amodal panoptic dataset.
  - Target: BlendPASS — a real-world 360° street-view panoptic segmentation dataset.

2. Synthetic-to-Real Adaptation

  - Source: AmodalSynthDrive — a synthetic dataset for amodal panoptic segmentation in driving scenes.
  - Target: BlendPASS — same as above.
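
The exact label-space conversion is provided by the released scripts and converted datasets below. Purely as an illustration of what aligning BlendPASS to a source 7-class label space involves, the Python sketch here remaps target semantic IDs to source IDs; the class IDs, the ignore value, and the function name are placeholder assumptions, not the mapping used in this repository.

```python
import numpy as np

# Hypothetical ID mapping from BlendPASS semantic labels to a source-aligned
# 7-class space; the real mapping is defined by the released conversion
# scripts and converted datasets, not by this sketch.
TARGET_TO_SOURCE = {
    11: 0,  # e.g. car
    12: 1,  # e.g. truck
    13: 2,  # e.g. bus
    # remaining classes would follow the dataset definitions
}
IGNORE_ID = 255  # assumed ignore value for unmapped pixels


def align_label_map(target_label: np.ndarray) -> np.ndarray:
    """Remap a target semantic-label map into the source label space."""
    aligned = np.full_like(target_label, IGNORE_ID)
    for tgt_id, src_id in TARGET_TO_SOURCE.items():
        aligned[target_label == tgt_id] = src_id
    return aligned
```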

🔗 Download Converted BlendPASS Datasets and Source Labels

We provide two versions of BlendPASS, each aligned to the respective source domain’s 7-class label space. Additionally, we release the preprocessed training labels for both source datasets used in our experiments.

| Dataset | Description | Download Link |
| --- | --- | --- |
| BlendPASS (APS-aligned) | Target dataset aligned to KITTI-360 APS | Google Drive |
| BlendPASS (ASD-aligned) | Target dataset aligned to AmodalSynthDrive | Google Drive |
| KITTI-360 APS Labels | Preprocessed amodal panoptic training labels | Google Drive |
| AmodalSynthDrive Labels | Preprocessed amodal panoptic training labels | Google Drive |

Training

KITTI-360 APS → BlendPASS

  1. Run Generate_pseudolabels_from_sourcemodel.py to generate pseudo-labels in NumPy format (see the sketch after these steps).

  2. Run the following conversion scripts:

  3. Run Training_Mix_targetonly_unmaskformer.py (Line 34).
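
For orientation, the following is a minimal sketch of what step 1 amounts to: running the frozen source-trained model on the unlabeled BlendPASS panoramas and saving its predictions as .npy files. The checkpoint path, image directory, and the simplified per-pixel argmax are assumptions for illustration only; the released Generate_pseudolabels_from_sourcemodel.py produces the actual amodal panoptic pseudo-labels.

```python
import glob
import os

import numpy as np
import torch
from PIL import Image

IMAGE_DIR = "data/BlendPASS/images"  # assumed location of target panoramas
OUT_DIR = "pseudo_labels_numpy"      # assumed output directory
os.makedirs(OUT_DIR, exist_ok=True)

# Placeholder: a source-trained segmentation model saved as a whole module.
model = torch.load("source_model.pth", map_location="cuda")
model.eval()

with torch.no_grad():
    for path in sorted(glob.glob(os.path.join(IMAGE_DIR, "*.png"))):
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
        x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).cuda()
        logits = model(x)  # assumed to return (1, C, H, W) class scores
        pseudo = logits.argmax(1).squeeze(0).to(torch.uint8).cpu().numpy()
        name = os.path.splitext(os.path.basename(path))[0]
        np.save(os.path.join(OUT_DIR, f"{name}.npy"), pseudo)
```

The conversion scripts in step 2 would then turn these raw NumPy outputs into the label format expected by the training script in step 3.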

Contact

If you have any suggestions or find our work helpful, feel free to contact us.

Email: [email protected]

If you find our work useful, please consider citing it:

@InProceedings{Cao_2025_ICCV,
    author    = {Cao, Yihong and Zhang, Jiaming and Zheng, Xu and Shi, Hao and Peng, Kunyu and Liu, Hang and Yang, Kailun and Zhang, Hui},
    title     = {Unlocking Constraints: Source-Free Occlusion-Aware Seamless Segmentation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {8961-8972}
}
