# SNI-SLAM: Semantic Neural Implicit SLAM

Siting Zhu\*, Guangming Wang\*, Hermann Blum, Jiuming Liu, Liang Song, Marc Pollefeys, Hesheng Wang
First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an anaconda environment called `sni`. For Linux, you need to install `libopenexr-dev` before creating the environment.

```shell
sudo apt-get install libopenexr-dev
conda env create -f environment.yaml
conda activate sni
```

- Download the data with semantic annotations in Google Drive and save it into the `./data/replica` folder. We only provide a subset of the Replica dataset. For full Replica data generation, please refer to the `data_generation` directory.
- Download the pretrained segmentation network in Google Drive and save it into the `./seg` folder (unzip `seg/facebookresearch_dinov2_main.zip`). Then you can run SNI-SLAM:

```shell
python -W ignore run.py configs/Replica/room1.yaml
```

The mesh for evaluation is saved as `$OUTPUT_FOLDER/mesh/final_mesh_eval_rec_culled.ply`.
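After a run finishes, a quick way to confirm that the evaluation mesh was written is a small shell check. This is only a convenience sketch; `output/Replica/room1` is the output path used for the `room1` config:

```shell
# Sanity check: does the culled evaluation mesh exist after a run?
# OUTPUT_FOLDER here matches the room1 config's output path.
OUTPUT_FOLDER=output/Replica/room1
mesh="$OUTPUT_FOLDER/mesh/final_mesh_eval_rec_culled.ply"
if [ -f "$mesh" ]; then
  echo "mesh found: $mesh"
else
  echo "mesh missing: $mesh"
fi
```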
To evaluate the average trajectory error, run the command below with the corresponding config file:

```shell
# An example for room1 of Replica
python src/tools/eval_ate.py configs/Replica/room1.yaml
```

We follow existing code for reconstruction evaluation.
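To evaluate trajectory error on several Replica scenes in one go, a simple loop over config files works. The scene names below are an assumption based on the common Replica split, and the loop only prints each command as a dry run (drop the `echo` to actually execute):

```shell
# Dry-run: print the ATE evaluation command for each scene.
# Scene names are an assumption; adjust to the configs you actually have.
for scene in room0 room1 room2 office0; do
  cfg="configs/Replica/${scene}.yaml"
  echo "python src/tools/eval_ate.py ${cfg}"
done
```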
For visualizing the results, we recommend setting `mesh_freq: 40` in `configs/SNI-SLAM.yaml` and running SNI-SLAM from scratch. After SNI-SLAM is trained, run the following command for visualization:

```shell
python visualizer.py configs/Replica/room1.yaml --top_view --save_rendering
```

The result of the visualization will be saved at `output/Replica/room1/vis.mp4`. The green trajectory indicates the ground truth trajectory, and the red one is the trajectory of SNI-SLAM.

- `--output $OUTPUT_FOLDER`: output folder (overwrites the output folder in the config file)
- `--top_view`: set the camera to a top view; otherwise, the camera is set to the first frame of the sequence
- `--save_rendering`: save the rendering video to `vis.mp4` in the output folder
- `--no_gt_traj`: do not show the ground truth trajectory
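These flags can be combined. As a hedged sketch, the snippet below assembles and prints a visualizer command that keeps the first-frame viewpoint (no `--top_view`), hides the ground truth trajectory, and writes to an explicit output folder; run the printed command from the repo root:

```shell
# Assemble and print a combined visualizer invocation (dry run).
# The output folder below matches the default for the room1 config.
cmd="python visualizer.py configs/Replica/room1.yaml \
  --output output/Replica/room1 --no_gt_traj --save_rendering"
echo "$cmd"
```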
If you find our code or paper useful, please consider citing:
```bibtex
@inproceedings{zhu2024sni,
  title={Sni-slam: Semantic neural implicit slam},
  author={Zhu, Siting and Wang, Guangming and Blum, Hermann and Liu, Jiuming and Song, Liang and Pollefeys, Marc and Wang, Hesheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21167--21177},
  year={2024}
}
```