
Long Range Diffusion for Weakly Camouflaged Object Segmentation

This project provides the code and results for 'Long Range Diffusion for Weakly Camouflaged Object Segmentation'.

Authors: Rui Wang, Caijuan Shi, Weixiang Gao, Changyu Duan, Ao Cai, Fei Yu, Yunchao Wei

Network Architecture

[network architecture diagram]

Preparation

The training and testing experiments are conducted using PyTorch on a single GeForce GTX 1080 Ti GPU with 12 GB of memory.

Configuring your environment:

  • Creating a virtual environment: conda create -n LRDNet python=3.9
  • Installing the necessary packages: pip install -r requirements.txt

Downloading Training and Testing Sets

  • Download the training set S-COD (COD10K-train + CAMO-train): TrainDatasets. In the scribble annotations, "1" marks foreground, "2" marks background, and "0" marks unlabeled regions.
  • Download the test sets (CAMO-test + COD10K-test + NC4K): TestDatasets
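The scribble encoding above ("1" foreground, "2" background, "0" unlabeled) is typically split into supervision masks before computing a loss. A minimal numpy sketch of that split (function and variable names are illustrative, not from this repository):

```python
import numpy as np

def split_scribble(label_map):
    """Split an S-COD scribble annotation into boolean masks.

    label_map: 2-D integer array where 1 = foreground scribble,
    2 = background scribble, 0 = unlabeled pixel.
    """
    fg = label_map == 1       # annotated foreground pixels
    bg = label_map == 2       # annotated background pixels
    labeled = fg | bg         # only these pixels carry supervision
    return fg, bg, labeled

# Toy 2x3 scribble map: one foreground pixel, two background pixels.
scribble = np.array([[1, 0, 2],
                     [0, 0, 2]])
fg, bg, labeled = split_scribble(scribble)
print(int(labeled.sum()))  # 3 annotated pixels
```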

Pretrained Backbone Model

  • Download the pretrained backbone model: ResNet50, and put it in ./pth

Training

  • Modify the dataset paths in config.py.
  • First training: run python train.py; this creates the directory experiments/ containing logs and weights.
  • Second training: run python train.py --ckpt=last --second_time
  • Other configuration options can also be changed by editing config.py.
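Scribble-supervised training commonly optimizes a partial cross-entropy that counts only annotated pixels and ignores unlabeled ones. A hedged numpy sketch of that idea (the actual loss implemented in train.py may differ):

```python
import numpy as np

def partial_cross_entropy(prob_fg, label_map, eps=1e-7):
    """Binary cross-entropy averaged over annotated pixels only.

    prob_fg:   predicted foreground probability per pixel, in (0, 1)
    label_map: 1 = foreground scribble, 2 = background, 0 = unlabeled
    """
    fg = label_map == 1
    bg = label_map == 2
    losses = np.zeros_like(prob_fg, dtype=float)
    losses[fg] = -np.log(prob_fg[fg] + eps)        # push annotated fg toward 1
    losses[bg] = -np.log(1.0 - prob_fg[bg] + eps)  # push annotated bg toward 0
    n_labeled = fg.sum() + bg.sum()
    return losses.sum() / max(n_labeled, 1)        # unlabeled pixels contribute nothing

# Toy example: the middle pixel is unlabeled and does not affect the loss.
pred = np.array([[0.9, 0.5, 0.1]])
lbl = np.array([[1, 0, 2]])
loss = partial_cross_entropy(pred, lbl)  # about 0.105
```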

Testing

  • Testing: run python test.py; the result maps are saved in experiments/save_images/.
  • We provide the LRDNet result maps and training weights reported in the paper.

Evaluation

  • Tools: PySODMetrics, a simple and efficient implementation of SOD metrics.
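For reference, MAE (one of the standard SOD metrics that PySODMetrics covers) is just the mean absolute difference between the predicted map and the ground truth, both scaled to [0, 1]. A self-contained sketch:

```python
import numpy as np

def mae(pred, gt):
    """Mean Absolute Error between a predicted map and the ground truth.

    Both inputs are arrays scaled to [0, 1] (e.g. grayscale maps / 255).
    """
    pred = pred.astype(float)
    gt = gt.astype(float)
    return np.abs(pred - gt).mean()

pred = np.array([[0.8, 0.2],
                 [0.1, 0.9]])
gt = np.array([[1.0, 0.0],
               [0.0, 1.0]])
print(mae(pred, gt))  # 0.15
```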

Results

[result figures]

Credit

The code is partly based on CRNet.
