This is the official PyTorch implementation of "LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models".
conda env create -f environment.yml -n ldm_isp
pip install git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
pip install git+https://github.com/openai/CLIP.git@main#egg=clip
- This is an example of training the UNet taming modules:
$ bash train.sh
- After training the UNet taming modules, you can use the trained model to generate latent representations of the RAW files in your dataset.
- Then, you can train the Decoder taming modules using these latent representations and their corresponding sRGB GTs (long-exposure sRGB images); a rough sketch of this stage is given after this list.
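For orientation, a taming module can be thought of as a small trainable block that modulates features of the frozen latent-diffusion network with RAW-derived conditioning features, while all pretrained weights stay fixed. The sketch below is only illustrative: `TamingModule`, `decoder_taming_step`, and the decoder call signature are assumptions made for this example, not the repository's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TamingModule(nn.Module):
    """Illustrative feature-modulation block: predicts a scale and shift from
    RAW-derived conditioning features and applies them to a frozen decoder feature."""
    def __init__(self, cond_ch, feat_ch):
        super().__init__()
        self.to_scale = nn.Conv2d(cond_ch, feat_ch, kernel_size=3, padding=1)
        self.to_shift = nn.Conv2d(cond_ch, feat_ch, kernel_size=3, padding=1)

    def forward(self, feat, cond):
        # Resize conditioning features to the decoder feature resolution.
        cond = F.interpolate(cond, size=feat.shape[-2:], mode="bilinear", align_corners=False)
        return feat * (1 + self.to_scale(cond)) + self.to_shift(cond)

# Hypothetical training step: only the taming modules receive gradients;
# the pretrained decoder stays frozen. The decoder wiring is repo-specific.
def decoder_taming_step(decoder, taming_modules, latent, raw_cond, srgb_gt, optimizer):
    optimizer.zero_grad()
    pred_srgb = decoder(latent, raw_cond, taming_modules)
    loss = F.l1_loss(pred_srgb, srgb_gt)
    loss.backward()
    optimizer.step()
    return loss.item()
```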
We have released our test results together with their corresponding GTs, so you can compare them directly against your own results during your experiments.
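A straightforward way to make this comparison is to compute PSNR/SSIM on the sRGB images. The snippet below is a minimal example using scikit-image and Pillow; the helper name and file paths are assumptions, not part of this repository's evaluation code.

```python
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(result_path, gt_path):
    # Load both images as float RGB arrays in [0, 1].
    result = np.asarray(Image.open(result_path).convert("RGB"), dtype=np.float64) / 255.0
    gt = np.asarray(Image.open(gt_path).convert("RGB"), dtype=np.float64) / 255.0
    psnr = peak_signal_noise_ratio(gt, result, data_range=1.0)
    ssim = structural_similarity(gt, result, data_range=1.0, channel_axis=-1)
    return psnr, ssim
```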
- Download the pretrained models and put them in `pretrained_models/`.
- (The released pretrained models are re-implementations, so the evaluation scores are slightly better than those reported in the published paper.)
- Put your own RAW files (Bayer pattern) into `test_raw_images`, and the sRGB results will be saved in `results_raw_images`. A sketch of how Bayer RAW input is commonly packed is shown below the test command.
- To test:
$ bash test_custom.sh
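For reference, Bayer-pattern RAW input of this kind is commonly loaded with rawpy and packed into a 4-channel (RGGB) array, as in SID-style low-light pipelines. The sketch below illustrates that common convention; it is not necessarily the exact preprocessing performed by `test_custom.sh`.

```python
import numpy as np
import rawpy

def pack_bayer(raw_path):
    """Load a Bayer-pattern RAW file and pack the 2x2 mosaic into a (H/2, W/2, 4) array."""
    with rawpy.imread(raw_path) as raw:
        im = raw.raw_image_visible.astype(np.float32)
        # Normalize with the sensor black level and white level.
        black = np.mean(raw.black_level_per_channel)
        im = np.maximum(im - black, 0) / (raw.white_level - black)
    h, w = im.shape
    h, w = (h // 2) * 2, (w // 2) * 2
    im = im[:h, :w]
    # Assumes an RGGB layout; other Bayer patterns need a different channel order.
    packed = np.stack((im[0:h:2, 0:w:2],   # R
                       im[0:h:2, 1:w:2],   # G
                       im[1:h:2, 0:w:2],   # G
                       im[1:h:2, 1:w:2]),  # B
                      axis=-1)
    return packed
```

The packed array can then be converted to a tensor and fed to the network; other Bayer layouts (e.g. BGGR) require reordering the four channels accordingly.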
- This code is based on the excellent prior work StableSR.
If you find this repository useful for your research, please cite the following work:
@article{wen2023ldm,
  title={LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models},
  author={Wen, Qiang and Xing, Yazhou and Rao, Zhefan and Chen, Qifeng},
  journal={arXiv preprint arXiv:2312.01027},
  year={2023}
}
