This repository contains a PyTorch implementation for the paper: FrugalNeRF: Fast Convergence for Extreme Few-shot Novel View Synthesis without Learned Priors. Our work presents a simple baseline for reconstructing radiance fields in the few-shot setting, which achieves fast training without learned priors.
Install environment:
conda create -n frugalnerf python=3.8
conda activate frugalnerf
pip install torch torchvision
pip install tqdm scikit-image opencv-python configargparse lpips imageio-ffmpeg kornia tensorboard torchmetrics plyfile pandas timm
pip install torch-efficient-distloss
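After installing, you can run a quick sanity check (a minimal sketch; the module list simply mirrors the dependencies installed above) to confirm the core packages import and a GPU is visible:

# check_env.py -- verify that the main dependencies import correctly
import torch
import torchvision
import lpips
import kornia

# Training is much faster on a GPU; this reports whether CUDA is visible.
print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")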
Please follow the instructions in ViP-NeRF to set up various databases.
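To verify a scene is in place, you can use a small helper (hypothetical; it assumes the standard LLFF layout used by the training commands below, where each scene folder contains an images/ directory and a poses_bounds.npy file):

# check_scene.py -- hypothetical helper to sanity-check an LLFF scene folder
import os

def check_llff_scene(scene_dir):
    # Standard LLFF scenes ship camera poses in poses_bounds.npy
    # and the source photographs under images/.
    assert os.path.isfile(os.path.join(scene_dir, "poses_bounds.npy")), "missing poses_bounds.npy"
    assert os.path.isdir(os.path.join(scene_dir, "images")), "missing images/"
    print(f"{scene_dir} looks like a valid LLFF scene")

check_llff_scene("./data/nerf_llff_data/horns")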
The training script is train.py. To train FrugalNeRF:
For single scene training:
python train.py --config configs/llff_default_2v.txt --datadir ./data/nerf_llff_data/horns --train_frame_num 20 42 --test_frame_num 0 8 16 24 32 40 48 56

For training on the entire dataset:
bash scripts/run_llff_2v.sh
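If you prefer launching per-scene runs from Python rather than the shell script (a hypothetical sketch; the actual scripts/run_llff_2v.sh may select scenes, views, and flags differently), a loop over the eight LLFF scenes could look like:

# run_all_llff_2v.py -- hypothetical batch launcher, one training run per scene
import subprocess

SCENES = ["fern", "flower", "fortress", "horns", "leaves", "orchids", "room", "trex"]

for scene in SCENES:
    # Reuse the 2-view LLFF config for every scene; per-scene frame
    # indices (as in the horns example above) would be added here if needed.
    subprocess.run([
        "python", "train.py",
        "--config", "configs/llff_default_2v.txt",
        "--datadir", f"./data/nerf_llff_data/{scene}",
    ], check=True)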
To render images from a pre-trained checkpoint, simply pass --render_only 1 and --ckpt path/to/your/checkpoint:
python train.py --config configs/llff_default_2v.txt --ckpt path/to/your/checkpoint --render_only 1 --render_test 1
You may also need to specify what to render with --render_test 1, --render_train 1, or --render_path 1. The rendering results are saved in your checkpoint folder.
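To turn rendered frames into a video, the imageio-ffmpeg backend installed above can stitch them together (a minimal sketch; the frame folder name inside the checkpoint directory is illustrative):

# frames_to_video.py -- stitch rendered PNG frames into an mp4
import glob
import imageio.v2 as imageio

# Illustrative path; point this at the image folder inside your checkpoint directory.
frames = sorted(glob.glob("path/to/your/checkpoint/imgs_test_all/*.png"))
with imageio.get_writer("renders.mp4", fps=30) as writer:
    for path in frames:
        writer.append_data(imageio.imread(path))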
If you find our code or paper helpful, please consider citing:
@inproceedings{lin2024frugalnerf,
title={FrugalNeRF: Fast Convergence for Few-shot Novel View Synthesis without Learned Priors},
author={Chin-Yang Lin and Chung-Ho Wu and Chang-Han Yeh and Shih-Han Yen and Cheng Sun and Yu-Lun Liu},
booktitle={CVPR},
year={2025}
}
The code is available under the MIT license and draws from TensoRF and ViP-NeRF, which are also licensed under the MIT license. Licenses for these projects can be found in the licenses/ folder.
