DADiff: Boosting Diffusion Guidance via Learning Degradation Aware Models for Blind Super Resolution (WACV 2025 Oral)

Shao-Hao Lu, Ren Wang, Ching-Chun Huang, Wei-Chen Chiu

Paper | Supplementary | Video

🎉 Accepted to WACV'25 Algorithm Track 🎉

This is the official repository of the paper "Boosting Diffusion Guidance via Learning Degradation Aware Models for Blind Super Resolution".

Overview

Recently, diffusion-based blind super-resolution (SR) methods have shown a great ability to generate high-resolution images with abundant high-frequency detail, but this detail often comes at the expense of fidelity. Meanwhile, another line of research that focuses on rectifying the reverse process of diffusion models (i.e., diffusion guidance) has demonstrated the power to generate high-fidelity results for non-blind SR. However, these methods rely on known degradation kernels, which makes them difficult to apply to blind SR. To address these issues, we present DADiff in this paper. DADiff incorporates degradation-aware models into the diffusion guidance framework, eliminating the need to know the degradation kernels. Additionally, we propose two novel techniques, input perturbation and guidance scalar, to further improve performance. Extensive experimental results show that our proposed method outperforms state-of-the-art approaches on blind SR benchmarks.

Results

Our full results can be found here.

Qualitative comparison

Qualitative comparison of 4× upsampling on DIV2K-Val.

Qualitative comparison of 4× upsampling on CelebA-Val.

Quantitative comparison

The experimental results demonstrate that

  1. Our method outperforms DDNM in both fidelity and perceptual quality, demonstrating the effectiveness of our degradation-aware models.
  2. Our method also surpasses diffusion-based blind super-resolution methods in both fidelity and perceptual quality, where the fidelity gains are attributable to the use of diffusion guidance.
  3. Compared to MsdiNet, which is exactly our restoration model Gr, our method achieves superior perceptual quality, though at some cost in fidelity, since MsdiNet is a regression-based method.

Installation

Code

git clone https://github.com/ryanlu2240/DADiff.git
cd DADiff

Environment

conda env create -f environment.yml
conda activate DADiff
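
To verify the setup, a quick sanity check can help (this assumes the environment provides PyTorch with CUDA support, which the diffusion models require):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"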

Pre-Trained Models

To restore general images,

  1. Download our degradation-aware model div2k_x4.
  2. Also download this diffusion model (from guided-diffusion).

To restore human face images,

  1. Download our degradation-aware models celeba_x4 and celeba_x8.
  2. Also download this diffusion model (from SDEdit).
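
The shell scripts and the MsdiNet config expect the checkpoints at specific paths. As a minimal sketch, assuming you collect everything under exp/ckpt/ (a hypothetical layout; the file names below are illustrative, use whatever names the downloads actually have):

# Hypothetical layout; adjust names/paths to match the downloaded files.
mkdir -p exp/ckpt
mv ~/Downloads/div2k_x4.pth exp/ckpt/        # degradation-aware model for general images
mv ~/Downloads/celeba_x4.pth exp/ckpt/       # degradation-aware model for faces (4x)
mv ~/Downloads/diffusion_model.pt exp/ckpt/  # pretrained diffusion checkpoint

Whatever layout you choose, make sure --diffusion_ckpt and the checkpoint path inside the degradation-aware config (see the Command section below) point to it.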

Test Datasets

To reproduce our results in the paper, download our test datasets.

Running

Quick Start

Run the command below to get 4× SR results immediately. The results should be in DADiff/exp/result/.

bash run_celeba_x4.sh

Also check run_celeba_x8.sh and run_div2k_blind_x4.sh.
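
Once a script finishes, you can inspect the outputs directly (the path below is the default stated above, relative to the repo root):

ls exp/result/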

Command

The command in the shell scripts is organized as:

python main.py --simplified \
    --eta {ETA} \
    --config {diffusion_CONFIG} \
    --dataset celeba \
    --deg_scale {DEGRADATION_SCALE} \
    --alpha {GUIDANCE_SCALAR} \
    --total_step 100 \
    --mode implicit \
    --DDNM_A implicit --DDNM_Ap implicit \
    --posterior_formula DDIM \
    --perturb_y --perturb_A implicit --perturb_Ap implicit \
    --Learning_degradation \
    --IRopt {Degradation-Aware_CONFIG} \
    --image_folder {IMAGE_FOLDER} \
    --path_y {INPUT_PATH} \
    --diffusion_ckpt {DIFFUSION_CKPT} \
    --save_img

The options are defined as:

  • INPUT_PATH: the root folder of the input images.
  • ETA: the DDIM hyperparameter η (0 yields deterministic DDIM sampling).
  • DEGRADATION_SCALE: the scale of degradation (e.g., 4 for 4× SR).
  • diffusion_CONFIG: the name of the diffusion model config file.
  • GUIDANCE_SCALAR: the proposed guidance scalar.
  • DIFFUSION_CKPT: the path to the pretrained diffusion checkpoint.
  • IMAGE_FOLDER: the folder name for the results.
  • Degradation-Aware_CONFIG: the MsdiNet config file, including the degradation-aware model checkpoint setting.
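
As a concrete illustration, a hypothetical 4× CelebA invocation could look like the following. Every filled-in value here (the config names, paths, eta, and alpha) is an illustrative assumption, not taken from the paper; the actual settings are in the provided shell scripts:

# Hypothetical example; see run_celeba_x4.sh for the real values.
python main.py --simplified --eta 0.85 --config celeba_hq.yml --dataset celeba \
    --deg_scale 4 --alpha 1.0 --total_step 100 --mode implicit \
    --DDNM_A implicit --DDNM_Ap implicit --posterior_formula DDIM \
    --perturb_y --perturb_A implicit --perturb_Ap implicit \
    --Learning_degradation --IRopt configs/celeba_x4.yml \
    --image_folder celeba_x4_demo --path_y datasets/celeba_val \
    --diffusion_ckpt exp/ckpt/celeba_hq.ckpt --save_img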

Reference

If you find this work useful in your research, please cite our paper:

@inproceedings{lu2025dadiff,
  title={Boosting Diffusion Guidance via Learning Degradation-Aware Models for Blind Super Resolution},
  author={Lu, Shao-Hao and Wang, Ren and Huang, Ching-Chun and Chiu, Wei-Chen},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year={2025}
}

Acknowledgement

This work is built upon the giant shoulders of DDNM and MsdiNet. Great thanks to them!
