This repo is the official implementation of "Rethinking Detecting Salient and Camouflaged Objects in Unconstrained Scenes" (ICCV 2025).
Contact: [email protected]; [email protected]
- Please refer to the linked repository: SAM.
- You may need to install Apex using pip. A sketch of both installs is given below.
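Exact install commands are not pinned here, so the following is a minimal sketch based on the public segment-anything and NVIDIA Apex READMEs (an assumption about this repo's setup; adjust to your CUDA/PyTorch versions):

```bash
# Sketch only: install commands taken from the upstream READMEs, not this repo.

# SAM: install the segment-anything package.
pip install git+https://github.com/facebookresearch/segment-anything.git

# NVIDIA Apex: the PyPI package named "apex" is a different project,
# so install from source. This is the Python-only build; see the Apex
# README for the CUDA-extension build.
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir ./
```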
- Download the datasets and put them in the same folder. To match the folder names expected by the dataset mappers, it is best not to rename the folders; the structure should be:
DATASET_ROOT/
└── VOC-USC12K
    ├── ImageSets
    │   └── Segmentation
    │       ├── Scene-A.txt
    │       ├── Scene-B.txt
    │       ├── Scene-C.txt
    │       ├── Scene-D.txt
    │       ├── train.txt
    │       └── val.txt
    ├── JPEGImages
    └── SegmentationClass
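Before training, a quick sanity check of the layout can save a failed run. This is a hypothetical snippet (DATASET_ROOT is a placeholder for your download location):

```bash
# Hypothetical check that the USC12K layout above is in place.
DATASET_ROOT=/path/to/DATASET_ROOT   # placeholder
for f in Scene-A Scene-B Scene-C Scene-D train val; do
    [ -f "$DATASET_ROOT/VOC-USC12K/ImageSets/Segmentation/$f.txt" ] \
        || echo "missing split file: $f.txt"
done
[ -d "$DATASET_ROOT/VOC-USC12K/JPEGImages" ]        || echo "missing JPEGImages/"
[ -d "$DATASET_ROOT/VOC-USC12K/SegmentationClass" ] || echo "missing SegmentationClass/"
```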
- Download the pre-trained weights of SAM ViT-H: sam_vit_h_4b8939.pth.
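If you prefer the command line, the checkpoint can be fetched directly; the URL below is the official release URL from the segment-anything repository:

```bash
# Download the SAM ViT-H checkpoint (official segment-anything release).
wget https://dl.fbaipublicfiles.com/segment-anything/sam_vit_h_4b8939.pth
```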
- Download the pre-trained weights on USC12K: Baidu / Google.
Visual results of SOTA methods on the USC12K test set.
- To train our USCNet on a single GPU, run the following command. The trained models will be saved in the savePath folder. You can modify datapath if you want to run on your own datasets.

bash train.sh

- To test and evaluate our USCNet on USC12K:

bash test.sh

Only a Chinese PPT is available for now: MIR-Shared PPT.
To watch the video: Bilibili link.
Additional thanks to the following contributors to this project: Huaiyu Chen, Weiyi Cui, Mingxin Yang, Mengzhe Cui, Fei Liu, Yan Xu, Haopeng Fang, and Xiaokai Zhang from the School of Software Engineering, Huazhong University of Science and Technology.
If this helps you, please cite this work:
@inproceedings{zhou2025rethinking,
  title={Rethinking Detecting Salient and Camouflaged Objects in Unconstrained Scenes},
  author={Zhou, Zhangjun and Li, Yiping and Zhong, Chunlin and Huang, Jianuo and Pei, Jialun and Li, Hua and Tang, He},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={22372--22382},
  year={2025}
}
