[AAAI 2026 Oral] Divide-and-Conquer Decoupled Network for Cross-Domain Few-Shot Segmentation

The official implementation of "Divide-and-Conquer Decoupled Network for Cross-Domain Few-Shot Segmentation".

More detailed information can be found in the paper.

Authors: Runmin Cong, Anpeng Wang, Bin Wan, Cong Zhang, Xiaofei Zhou, Wei Zhang

Datasets

The following datasets are used for evaluation in CD-FSS:

Source domain:

  • PASCAL VOC2012:

    Download PASCAL VOC2012 devkit (train/val data):

    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar

    Download PASCAL VOC2012 SDS extended mask annotations from [Google Drive].

Target domains (Deepglobe, ISIC2018, Chest X-ray, and FSS-1000):

For convenience, you can also download the target domain datasets directly from our organized Baidu Netdisk.
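After downloading, the VOC devkit tarball can be unpacked with a few lines of Python. This is only a sketch mirroring the `wget` command above; `extract_archive` is not part of the repo, and the paths are illustrative:

```python
import tarfile
from pathlib import Path

def extract_archive(tar_path, dest_dir):
    """Extract a .tar archive into dest_dir and return the extraction root."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(tar_path) as tf:
        tf.extractall(dest)
    return dest

# Example (assumes the wget download above has completed):
# extract_archive("VOCtrainval_11-May-2012.tar", "./dataset")
```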

Pre-trained ResNet Models

Download pre-trained ResNet models: GoogleDrive or Baidu Netdisk.

Download SSP pre-trained models: GoogleDrive or Baidu Netdisk.

File Organization

    DCDNet/                                             # project file
    ├── dataset/                                        # dataset
    |   ├── VOC2012/                                    # source dataset: pascal voc 2012
    |   |   ├── JPEGImages/
    |   |   └── SegmentationClassAug/
    |   ├── Deepglobe                                   # target dataset: deepglobe
    |   |   ├── 1/                                      # category
    |   |   |   └── test/
    |   |   |       ├── origin/                         # image
    |   |   |       └── groundtruth/                    # mask
    |   |   ├── 2/                                      # category
    |   |   └── ...                                     
    |   ├── ISIC                                        # target dataset: isic
    |   |   ├── ISIC2018_Task1-2_Training_Input/        # image
    |   |   |   ├── 1/                                  # category
    |   |   |   └── ...
    |   |   └── ISIC2018_Task1_Training_GroundTruth/    # mask
    |   |       └── ...
    |   ├── LungSegmentation/                           # target dataset: chest x-ray
    |   |   ├── CXR_png/                                # image
    |   |   └── masks/                                  # mask
    |   └── FSS-1000                                    # target dataset: fss-1000
    |       ├── ab_wheel/                               # category
    |       └── ...
    |
    ├── pretrained/                                     # pretrained resnet models
    |   ├── resnet50.pth
    |   └── Ori_SSP_trained_on_VOC.pth
    |
    └── trained_models/                                 # official trained models
        ├── deepglobe/                                  # target dataset
        └── ...
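
A quick way to catch misplaced data before training is to verify the layout above. The following sketch is not part of the repo; the expected paths are taken directly from the tree:

```python
from pathlib import Path

# Directories expected under the project root (see the tree above).
EXPECTED_DIRS = [
    "dataset/VOC2012/JPEGImages",
    "dataset/VOC2012/SegmentationClassAug",
    "dataset/Deepglobe",
    "dataset/ISIC",
    "dataset/LungSegmentation",
    "dataset/FSS-1000",
    "pretrained",
    "trained_models",
]

def missing_dirs(root="."):
    """Return the expected directories that do not exist under root."""
    root = Path(root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]

# for d in missing_dirs("DCDNet"):
#     print("missing:", d)
```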

Environment

Conda environment settings:

conda create -n dcdnet python=3.10
conda activate dcdnet

pip install uv

# Install PyTorch 2.2.0 with CUDA 12.1
uv pip install torch==2.2.0 torchvision==0.17.0 torchaudio --index-url https://download.pytorch.org/whl/cu121

uv pip install -r requirements.txt
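
After installation, a minimal sanity check (assuming only that `torch` imports cleanly if installed) confirms the version and CUDA visibility:

```python
import importlib.util

def check_env():
    """Report whether torch is importable, its version, and CUDA availability."""
    info = {"torch": None, "cuda": None}
    if importlib.util.find_spec("torch") is not None:
        import torch
        info["torch"] = torch.__version__
        info["cuda"] = torch.cuda.is_available()
    return info

print(check_env())  # e.g. {'torch': '2.2.0+cu121', 'cuda': True}
```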

Run the code

Here is an example on the ISIC dataset:

First, you need to train a model on the source dataset:

python train.py --data-root ./dataset --dataset isic --cuda 0

Then, you need to fine-tune the trained model on the target dataset:

python finetuning.py --data-root ./dataset --dataset isic --cuda 0

You can use our trained models for evaluation directly:

python test.py --data-root ./dataset --dataset isic --cuda 0
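
The three stages above can be chained in a small driver script. This is a sketch, not part of the repo; `build_cmd` and `run_pipeline` are hypothetical helpers that only mirror the flags shown above:

```python
import subprocess
import sys

def build_cmd(script, dataset, data_root="./dataset", cuda=0):
    """Assemble the command line for one stage (train / finetune / test)."""
    return [sys.executable, script,
            "--data-root", data_root,
            "--dataset", dataset,
            "--cuda", str(cuda)]

def run_pipeline(dataset="isic"):
    """Run training, fine-tuning, and evaluation back to back."""
    for script in ("train.py", "finetuning.py", "test.py"):
        subprocess.run(build_cmd(script, dataset), check=True)

# run_pipeline("isic")
```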

Please note that performance may fluctuate within a small range due to different batch sizes, seeds, devices, and environments.
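
A common way to reduce such run-to-run variance is to fix all random seeds before training. The sketch below is a general-purpose helper (not part of the repo); the NumPy and PyTorch calls are guarded so it also runs where those packages are absent:

```python
import random

def set_seed(seed=0):
    """Seed Python, NumPy, and PyTorch RNGs (the latter two only if installed)."""
    random.seed(seed)
    try:
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
    except ImportError:
        pass

set_seed(42)
```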

Citation

If you use this codebase for your research, please consider citing:

@article{cong2025divide,
  title={Divide-and-Conquer Decoupled Network for Cross-Domain Few-Shot Segmentation},
  author={Cong, Runmin and Wang, Anpeng and Wan, Bin and Zhang, Cong and Zhou, Xiaofei and Zhang, Wei},
  journal={arXiv preprint arXiv:2511.07798},
  year={2025}
}

Acknowledgement

Our codebase builds on the official code of IFA and SSP.

We also thank PATNet and other FSS and CD-FSS works for their great contributions.

Reference

[1] Shuo Lei, Xuchao Zhang, Jianfeng He, Fanglan Chen, Bowen Du, and Chang-Tien Lu. Cross-domain few-shot semantic segmentation. ECCV, 2022.

[2] Jiahao Nie, Yun Xing, Gongjie Zhang, Pei Yan, Aoran Xiao, Yap-Peng Tan, Alex C Kot, Shijian Lu. Cross-Domain Few-Shot Segmentation via Iterative Support-Query Correspondence Mining. CVPR, 2024.
