Shuhong Liu, Lin Gu, Ziteng Cui, Xuangeng Chu, Tatsuya Harada
git clone git@github.com:ShuhongLL/I2-NeRF.git
Our NeRF framework is built on a PyTorch implementation of ZipNeRF and uses the same environment configuration. For more setup options (nvdiffrast, DPCPP, etc.), please refer to its installation instructions.
conda create --name zipnerf python=3.9
conda activate zipnerf
pip install -r requirements.txt
pip install ./extensions/cuda
We evaluate low-light scenes using the LOM dataset (links). For underwater scene evaluation, we use the SeaThru-NeRF dataset (links).
We provide direct downloads of the rendered test views produced by our model on the LOM and SeaThru-NeRF datasets (links).
For customized data, we support the Colmap, LLFF, and Blender formats. Please follow the corresponding data preparation instructions for each format.
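For reference, a Colmap-style scene directory typically looks like the following (illustrative; the layout follows the usual multinerf conventions that ZipNeRF inherits):

scene/
  images/        # input RGB images
  sparse/0/      # COLMAP reconstruction (cameras.bin, images.bin, points3D.bin)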
We provide ready-to-run configuration files at ./configs/${dataset}/${scene}.gin for the LOM and SeaThru-NeRF datasets. The following are several key configuration options:
Config.enable_absorb=True enables the absorption media branch.
Config.enable_scatter=True enables the scattering media branch.
Config.enable_spatial_media=True enables spatially varying media density. When set to False, the model uses a per-ray homogeneous media density, which is commonly used under scattering conditions.
Config.enable_bcp=True enables the bright channel prior, which estimates per-pixel illuminance under low-light conditions. When set to True, our dataloader automatically computes the pixel-level BCP values.
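For intuition, a bright channel is commonly computed as the per-pixel maximum over color channels, max-pooled over a local patch. Below is a minimal sketch of such a computation (illustrative only; the patch size and the exact formulation used by our dataloader may differ):

import numpy as np
from scipy.ndimage import maximum_filter

def bright_channel(image, patch_size=15):
    """Bright channel of an HxWx3 image in [0, 1].

    Takes the max over RGB at each pixel, then the max over a local
    patch, giving a rough per-pixel illuminance estimate.
    patch_size is a hypothetical choice, not the repo's default.
    """
    per_pixel_max = image.max(axis=-1)                      # max over color channels
    return maximum_filter(per_pixel_max, size=patch_size)  # max over local patch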
Config.enable_depth_prior=True enables pseudo-depth labels generated by the DepthAnythingV2 model. We provide the following script to produce a depth folder in each scene before training, where -s points to the dataset root directory and -n specifies the image folder name (e.g., "images" in the SeaThru-NeRF dataset or "low" in the LOM dataset).
python pred_depth.py -s /path/to/data/root/ -n images
Config.luminance_mean=0.5 sets the target luminance level when restoring well-lit scenes from low-light inputs.
Config.contrast_factor=5 sets the contrast enhancement level when restoring well-lit scenes from low-light inputs.
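Taken together, a scene config could combine these bindings as follows (a hypothetical combination shown for illustration; consult the provided .gin files for the actual per-scene settings):

# Illustrative bindings; see ./configs/${dataset}/${scene}.gin for real values
Config.enable_absorb = True
Config.enable_scatter = True
Config.enable_spatial_media = False   # per-ray homogeneous media
Config.enable_bcp = True
Config.enable_depth_prior = True
Config.luminance_mean = 0.5
Config.contrast_factor = 5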
We provide a script to automatically compute luminance_mean and contrast_factor from two arbitrary (paired or unpaired) low-light and reference images:
python pred_llhyp.py -s /path/to/lowlight -t /path/to/ref
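For intuition, such hyperparameters could plausibly be derived from simple image statistics, e.g. the mean luminance of the reference and the ratio of the two images' contrast. The sketch below is hypothetical; pred_llhyp.py may use a different formulation:

import numpy as np

REC601 = np.array([0.299, 0.587, 0.114])  # RGB-to-luma weights

def estimate_llhyp(lowlight, reference):
    """Estimate (luminance_mean, contrast_factor) from two RGB images in [0, 1].

    Hypothetical formulation: target luminance = mean luma of the
    reference; contrast factor = ratio of luma standard deviations.
    """
    low_luma, ref_luma = lowlight @ REC601, reference @ REC601
    luminance_mean = float(ref_luma.mean())
    contrast_factor = float(ref_luma.std() / (low_luma.std() + 1e-8))
    return luminance_mean, contrast_factor

To run the training on the LOM dataset: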
python train.py \
--gin_configs="configs/LOM/${SCENE}.gin" \
--gin_bindings="Config.data_dir = '${DATA_DIR}'" \
--gin_bindings="Config.exp_name = '${EXPERIMENT}'"or simply:
./scripts/train_lom.sh

To run the training on the SeaThru-NeRF dataset:
python train.py \
--gin_configs=configs/SeaThru/llff_uw.gin \
--gin_bindings="Config.data_dir = '${DATA_DIR}'" \
--gin_bindings="Config.exp_name = '${EXPERIMENT}'"or simply:
./scripts/train_uw.sh

Training typically takes around 40-60 minutes and uses ~20 GB of memory on a single GPU. To speed up training, you can also run accelerate launch train.py to enable multi-GPU training. If you encounter a GPU OOM error, we recommend reducing the training batch size (default 2**14) by specifying, e.g., Config.batch_size = 1024 in the configuration file.
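For example, a multi-GPU run with a reduced batch size could look like the following (the batch size here is illustrative; any Config.* option can be overridden via --gin_bindings in the same way as above):

accelerate launch train.py \
  --gin_configs="configs/LOM/${SCENE}.gin" \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.exp_name = '${EXPERIMENT}'" \
  --gin_bindings="Config.batch_size = 4096"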
The evaluation script renders the test views and computes the photometric metrics.
Evaluation on the LOM dataset:

./scripts/eval_lom.sh

Evaluation on the SeaThru-NeRF dataset:
./scripts/eval_uw.sh

To render the test views individually, you can run
python render.py --gin_configs=/path/to/exp/config

where gin_configs specifies the path to the config.gin file in the generated experiment directory.
This work builds upon the following repositories: multinerf, zipnerf-pytorch, Aleth-NeRF, SeaThru-NeRF. We thank the authors for making their code publicly available.
If you find our work useful, please consider citing our paper as:
@inproceedings{liu2025i2nerf,
title = {I2-NeRF: Learning Neural Radiance Fields Under Physically-Grounded Media Interactions},
author = {Liu, Shuhong and Gu, Lin and Cui, Ziteng and Chu, Xuangeng and Harada, Tatsuya},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2025},
}
