This repository includes the official code for "Neural Camera Simulators" (CVPR 2021).
Neural Camera Simulators
Hao Ouyang*, Zifan Shi*, Chenyang Lei, Ka Lung Law, Qifeng Chen (* indicates joint first authors)
HKUST
Clone this repo.
git clone https://github.com/ken-ouyang/neural_image_simulator.git
cd neural_image_simulator/
We have tested our code on Ubuntu 18.04 LTS with PyTorch 1.3.0 and CUDA 10.1. Please install the dependencies with
conda env create -f environment.yml
We provide two datasets for training and testing: [Nikon] and [Canon]. The data can be preprocessed with the command:
python preprocess/preprocess_nikon.py --input_dir the_directory_of_the_dataset --output_dir the_directory_to_save_the_preprocessed_data --image_size 512
OR
python preprocess/preprocess_canon.py --input_dir the_directory_of_the_dataset --output_dir the_directory_to_save_the_preprocessed_data --image_size 512
The preprocessed data can also be downloaded via the links [Nikon] and [Canon]. Put the preprocessed dataset into ./ProcessedData/Nikon/ or ./ProcessedData/Canon/.
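If you preprocess both datasets, a short Python wrapper like the sketch below can drive the two scripts in one go. The directory names raw/Nikon and raw/Canon are placeholders, not part of the repository; the command-line arguments are the same ones shown above.

import subprocess
from pathlib import Path

# Hypothetical raw-data locations; replace them with your own paths.
JOBS = {
    "preprocess/preprocess_nikon.py": ("raw/Nikon", "ProcessedData/Nikon"),
    "preprocess/preprocess_canon.py": ("raw/Canon", "ProcessedData/Canon"),
}

for script, (input_dir, output_dir) in JOBS.items():
    Path(output_dir).mkdir(parents=True, exist_ok=True)
    # Same arguments as the preprocessing commands above.
    subprocess.run(
        ["python", script,
         "--input_dir", input_dir,
         "--output_dir", output_dir,
         "--image_size", "512"],
        check=True,
    )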
The training arguments are specified in a JSON file. To train the model, run the following command:
python train.py --config config/config_train.json
The checkpoints will be saved to ./exp/{exp_name}/.
When training the noise module, set unet_training in the JSON file to true; otherwise, set it to false. Similarly, set aperture to true when training the aperture module and to false otherwise.
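As a convenience, a small helper like the sketch below can toggle these flags before each training stage. The field names unet_training and aperture come from the description above; treating them as top-level keys of config/config_train.json is an assumption.

import json

# Config path taken from the training command above.
CONFIG_PATH = "config/config_train.json"

def set_stage(unet_training: bool, aperture: bool) -> None:
    """Toggle the stage-specific flags in the training config (assumed layout)."""
    with open(CONFIG_PATH) as f:
        config = json.load(f)
    config["unet_training"] = unet_training  # true only when training the noise module
    config["aperture"] = aperture            # true only when training the aperture module
    with open(CONFIG_PATH, "w") as f:
        json.dump(config, f, indent=2)

# Example: prepare the config for training the aperture module.
set_stage(unet_training=False, aperture=True)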
Download the pretrained demo [checkpoints] and put them under ./exp/demo/. Then, run the command
python demo_simulation.py --config config/config_demo.json
The simulated results will be available under ./exp/{exp_name}/.
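To quickly inspect the simulated outputs, something like the snippet below may be used. It assumes the demo writes ordinary image files (e.g. PNG) into the experiment folder, and that exp_name matches the value set in config/config_demo.json.

from pathlib import Path
from PIL import Image

# exp_name is a placeholder; use the value from config/config_demo.json.
exp_name = "demo"
result_dir = Path("exp") / exp_name

# Assumes the simulated results are saved as standard image files.
for image_path in sorted(result_dir.glob("*.png")):
    with Image.open(image_path) as img:
        print(image_path.name, img.size)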
If you find our work useful, please cite:
@inproceedings{ouyang2021neural,
title = {Neural Camera Simulators},
author = {Ouyang, Hao and Shi, Zifan and Lei, Chenyang and Law, Ka Lung and Chen, Qifeng},
booktitle = {CVPR},
year = {2021}
}
Part of the code benefits from Pytorch-UNet and pyexiftool.