This is the official implementation for Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration (arXiv version).
Kang Liao, Zongsheng Yue, Zhouxia Wang, Chen Change Loy
S-Lab, Nanyang Technological University
Existing domain adaptation approaches are mainly developed for high-level vision tasks. However, aligning high-level deep representations in feature space may overlook low-level variations essential for image restoration, while pixel-space approaches often involve computationally intensive adversarial paradigms that can lead to instability during training. In this work, we propose a new noise-space solution that preserves low-level appearance across different domains within a compact and stable framework.
- Our work represents the first attempt at addressing domain adaptation in the noise space for image restoration. We show the unique benefit of the diffusion loss in closing the gap between synthetic and real-world data, which cannot be achieved with existing losses.
- To eliminate shortcut learning in joint training, we design strategies to fool the diffusion model, making it difficult to distinguish between synthetic and real conditions and thereby encouraging both to align consistently with the target clean distribution.
- Our method offers a general and flexible adaptation strategy applicable beyond specific restoration tasks. It requires no prior knowledge of noise distribution or degradation models and is compatible with various restoration networks. The diffusion model is discarded after training, incurring no extra computational cost during restoration inference.
- 2024.10.08: The project page of Noise-DA is online.
- 2024.12.26: Release the code (both training and inference) and pre-trained models.
- 2025.01.23: This paper has been accepted to ICLR 2025.
- Release more pre-trained restoration models of our extended experiments, such as DnCNN, Uformer, SwinIR, Restormer, etc.
- Release Gradio Demo.
The code has been implemented with PyTorch 2.1.2 and CUDA 12.1.
An example of installation commands is provided as follows:
# git clone this repository
git clone https://github.com/KangLiao929/Noise-DA
cd Noise-DA
# create new anaconda env
conda create -n Noise-DA python=3.9
conda activate Noise-DA
# install python dependencies
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
- Download the pretrained models of each image restoration task (e.g., denoising, deraining, and deblurring) to the `checkpoints` folder.
- Customize the paths of the pretrained models (`"checkpoint"`) and degraded images (`"data_root"`) in the `.json` files of the `configs_demo` folder. We also provide some examples of degraded images in the `inputs` folder. Run the following scripts for different restoration tasks.
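As a sketch, the customized fields in a demo config (e.g., `configs_demo/denoising.json`) might look like the fragment below. The field names `"checkpoint"` and `"data_root"` come from the instructions above; the paths are placeholders and the repo's actual configs contain additional fields:

```json
{
    "data_root": "./inputs/denoising",
    "checkpoint": "./checkpoints/denoising.pth"
}
```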
# test the image denoising model
sh test.sh ./configs_demo/denoising.json
# test the image deraining model
sh test.sh ./configs_demo/deraining.json
# test the image deblurring model
sh test.sh ./configs_demo/deblurring.json
The restored results can be found in the `results` folder. Note that the above restoration networks are built on the classical and handy U-Net architecture. Better restoration performance can be achieved with more powerful architectures such as SwinIR and Restormer, and we will release their pretrained models soon.
🌈 Check out more visual results and restoration interactions here.
Instructions for dataset preparation, training, and evaluation (to reproduce our quantitative metrics) for each image restoration task are provided in their respective directories. Here is a summary table with hyperlinks for easy navigation:
| Task | Dataset Instructions | Training Instructions | Evaluation Instructions |
|---|---|---|---|
| Image Denoising | Link | Link | Link |
| Image Deraining | Link | Link | Link |
| Image Deblurring | Link | Link | Link |
This project is licensed under NTU S-Lab License 1.0. Redistribution and use should follow this license.
This project is based on Palette, openai/guided-diffusion, and Restormer. Thanks for their awesome work.
If you find our work useful for your research, please consider citing the paper:
@article{liao2024denoising,
title={Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration},
author={Liao, Kang and Yue, Zongsheng and Wang, Zhouxia and Loy, Chen Change},
journal={arXiv preprint arXiv:2406.18516},
year={2024}
}

For any questions, feel free to email [email protected].
