This project provides the code and results for 'Long Range Diffusion for Weakly Camouflaged Object Segmentation'
Authors: Rui Wang, Caijuan Shi, Weixiang Gao, Changyu Duan, Ao Cai, Fei Yu, Yunchao Wei
The training and testing experiments are conducted using PyTorch on a single GeForce GTX 1080 Ti GPU with 12 GB of memory.
- Create a virtual environment:
conda create -n LRDNet python=3.9
- Install the necessary packages:
pip install -r requirements.txt
- Download the training dataset S-COD (COD10K-train + CAMO-train): TrainDatasets. In the annotations, "1" stands for foreground, "2" for background, and "0" for unlabeled regions.
- Download the test datasets (CAMO-test + COD10K-test + NC4K): TestDatasets
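The label convention above (1 = foreground, 2 = background, 0 = unlabeled) can be turned into supervision masks with a few lines of NumPy. This is an illustrative sketch only; the function name is hypothetical and not from this repository.

```python
import numpy as np

def split_weak_label(label: np.ndarray):
    """Split a weak annotation map into boolean masks, assuming the
    convention stated above: 1 = foreground, 2 = background, 0 = unlabeled."""
    fg = label == 1         # pixels annotated as foreground
    bg = label == 2         # pixels annotated as background
    unlabeled = label == 0  # pixels that provide no supervision signal
    return fg, bg, unlabeled

label = np.array([[0, 1],
                  [2, 0]])
fg, bg, unl = split_weak_label(label)
print(int(fg.sum()), int(bg.sum()), int(unl.sum()))  # 1 1 2
```

Only the foreground/background pixels contribute to the supervised loss; the unlabeled region is what makes the setting weakly supervised.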
- Download the pretrained backbone model ResNet50 and put it in ./pth
- Modify the dataset paths in config.py.
- First training: run
python train.py
This creates the directory experiments\ containing logs and weights.
- Second training: run
python train.py --ckpt=last --second_time
- You can change the other config options by modifying config.py.
- Testing: run
python test.py
The result maps are saved in experiments\save_images\.
- We provide the LRDNet testing maps and training weights reported in the paper.
- Tools: PySODMetrics, a simple and efficient implementation of SOD metrics.
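As a sanity check on the evaluation, one of the metrics in the SOD suite, MAE, is simple enough to verify by hand: the mean absolute error between a [0, 1]-normalized prediction map and a binary ground-truth mask. This is a minimal standalone sketch, not the PySODMetrics implementation itself.

```python
import numpy as np

def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between a prediction map and a binary GT mask."""
    pred = pred.astype(np.float64)
    if pred.max() > 1.0:       # e.g. 8-bit grayscale maps stored in [0, 255]
        pred = pred / 255.0
    gt = (gt > 0.5).astype(np.float64)  # binarize the ground truth
    return float(np.abs(pred - gt).mean())

pred = np.array([[0.0, 1.0],
                 [0.5, 0.5]])
gt = np.array([[0, 1],
               [1, 0]])
print(mae(pred, gt))  # 0.25
```

For the full metric suite (S-measure, E-measure, weighted F-measure, etc.), use the PySODMetrics package linked above.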
The code is partly based on CRNet.