Omar Alama
·
Avigyan Bhattacharya
·
Haoyang He
·
Seungchan Kim
·
Yuheng Qiu
·
Wenshan Wang
·
Cherie Ho
·
Nikhil Keetha
·
Sebastian Scherer
Paper | Project Page | Video | Podcast | Thread
- 🤖 Guide your robot with semantics within and beyond the depth perception range. RayFronts can be easily deployed as part of your robotics stack: it supports ROS2 inputs for mapping and querying and ships with robust visualizations.
- 🖼️ Stop using slow SAM crops + CLIP pipelines. Use our encoder to get dense language-aligned features in a single forward pass.
- 🚀 Bootstrap your semantic mapping project. Use the modular RayFronts mapping codebase with its supported datasets to build your project (novel encoding, novel mapping, novel feature fusion, etc.) and get results fast.
- 💬 Reach out or raise an issue if you face any problems!
- [06.16.2025] 🔥🔥🔥 RayFronts has been accepted to IROS25. See you in Hangzhou!!
- [06.12.2025] 🔥🔥 RayFronts has been accepted to the RSS25 SemRobs & RoboReps workshops. See you in Los Angeles!!
- [06.11.2025] 🔥 RayFronts code is released!
- [04.08.2025] Initial public arXiv release.
For a minimal setup without ROS and without OpenVDB, you can create a Python environment from the environment.yml conda specification. (Installing it in one shot usually does not work; you may need to start with a PyTorch-enabled environment and install the remaining dependencies with pip.) This will not let you run the full RayFronts mapping, however, since that requires OpenVDB. A quick sanity check for the base environment is sketched below.
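Before pip-installing the remaining dependencies, you can confirm the base PyTorch environment works. This snippet is illustrative only and not part of the RayFronts codebase:

```python
# Sanity-check the base PyTorch environment before pip-installing the
# remaining environment.yml dependencies.
import torch

print(torch.__version__)          # the full setup below expects 2.4.x
print(torch.cuda.is_available())  # should be True with a CUDA 12.1 build
```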
For a full local installation:
- (Optional) Install ros2-humble in a conda/mamba environment using these instructions
- Install PyTorch 2.4 with CUDA 12.1:
conda install pytorch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 -c pytorch
- Install the remaining packages in environment.yml.
- Clone the patched OpenVDB, then build and install it into your conda environment (a quick import check is sketched after this list):
apt-get install -y libboost-iostreams-dev libtbb-dev libblosc-dev
git clone https://github.com/OasisArtisan/openvdb && mkdir openvdb/build && cd openvdb/build
cmake -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX \
  -DOPENVDB_BUILD_PYTHON_MODULE=ON \
  -DOPENVDB_BUILD_PYTHON_UNITTESTS=ON \
  -DOPENVDB_PYTHON_WRAP_ALL_GRID_TYPES=ON \
  -DUSE_NUMPY=ON \
  -Dnanobind_DIR=$CONDA_PREFIX/lib/python3.11/dist-packages/nanobind/cmake ..
make -j4
make install
- Build the C++ extension by running:
CMAKE_INSTALL_PREFIX=$CONDA_PREFIX ./compile.sh
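To verify that the OpenVDB Python bindings landed in your environment, here is a minimal import check (module and class names follow the standard OpenVDB Python API; adjust if your build differs):

```python
# Confirm the patched OpenVDB's Python bindings are importable from the
# conda environment they were installed into.
import openvdb

grid = openvdb.FloatGrid()   # construct an empty float grid
print(openvdb.__file__)      # should resolve to a path under $CONDA_PREFIX
```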
Two Docker build files are provided: one for desktop and one for the NVIDIA Jetson platform. Each gives you a full installation of RayFronts with ROS2 and OpenVDB.
You can build the image by going to the docker directory then running:
docker build . -f desktop.Dockerfile -t rayfronts:desktop
To run the docker image, an example command is available at the top of each docker file.
- Set up the data source / dataset. Head to the datasets folder to learn more about the available options. Each dataset class documents how to download and structure your data. For now, you can download NiceSlam Replica to test.
- Configure RayFronts. RayFronts has many hyperparameters to choose from. Head over to configs to learn more about the different configuration options. For now, we will pass configurations via the command line for simplicity.
- There are many mapping systems to choose from, ranging from simple occupancy maps and semantic voxel maps to the full-fledged Semantic Ray Frontiers (RayFronts). Head over to mapping to learn more about the different options. For now, we will assume you want to run the full-fledged RayFronts mapper. Run:
python3 -m rayfronts.mapping_server dataset=niceslam_replica dataset.path="path_to_niceslam_replica" mapping=semantic_ray_frontiers_map mapping.vox_size=0.05 dataset.rgb_resolution=[640,480] dataset.depth_resolution=[640,480]
- To add and visualize queries, set up a query file (e.g., named "prompts.txt") with one query per line; lines can also be paths to images for image querying (a sketch for generating such a file follows this list). Then add the following command-line options when running RayFronts:
querying.text_query_mode=prompts querying.query_file=prompts.txt querying.compute_prob=True querying.period=100
More information about the querying options can be found in the default.yml config file.
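A minimal sketch for generating such a query file, assuming only the one-query-per-line format described above (the queries and the image path are made up for illustration):

```python
# Write a query file with one query per line. Plain lines are text queries,
# while lines that are paths to images are treated as image queries.
queries = [
    "fire extinguisher",         # hypothetical text query
    "red backpack",              # hypothetical text query
    "/path/to/query_image.png",  # hypothetical image path for image querying
]
with open("prompts.txt", "w") as f:
    f.write("\n".join(queries) + "\n")
```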
If you are interested in using the encoder on its own for zero-shot open-vocabulary semantic segmentation, follow the example at the top of the NARADIO module.
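For intuition, here is a sketch of the zero-shot segmentation step itself, with random placeholder tensors standing in for the encoder's dense features and text embeddings. The names below are not the RayFronts API; the authoritative entry point remains the example at the top of the NARADIO module:

```python
import torch
import torch.nn.functional as F

# Placeholders for encoder outputs: dense language-aligned pixel features and
# per-class text embeddings (e.g., for ["chair", "table", "floor"]).
pixel_feats = torch.randn(1, 512, 60, 80)  # (N, C, H', W')
text_feats = torch.randn(3, 512)           # (K, C)

# Cosine similarity between every pixel feature and every class embedding.
pixel_feats = F.normalize(pixel_feats, dim=1)
text_feats = F.normalize(text_feats, dim=-1)
logits = torch.einsum("nchw,kc->nkhw", pixel_feats, text_feats)

# Taking the argmax over classes yields a zero-shot segmentation map.
segmentation = logits.argmax(dim=1)        # (1, H', W') integer class ids
```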
Alternatively, run the provided Gradio app by installing Gradio (pip install gradio) and then running:
python scripts/encoder_semseg_app.py encoder=naradio encoder.model_version=radio_v2.5-b
For details on reproducing RayFronts tables, go to experiments.
Configure your evaluation parameters; use this as an example. Then run:
python scripts/semseg_eval.py --config-dir <path_to_config_dir> --config-name <name_of_config_file_without_.yaml>
Results will populate in the eval_out directory set in the config.
Configure your evaluation parameters; use this as an example. Then run:
python scripts/srchvol_eval.py --config-dir <path_to_config_dir> --config-name <name_of_config_file_without_.yaml>
Results will populate in the eval_out directory set in the config.
Note that AUC values are computed after the initial results. Use summarize_srchvol_eval.py to compute them along with any additional derivative metrics you are interested in.
If you find this repository useful, please consider giving it a star and a citation:
@misc{alama2025rayfrontsopensetsemanticray,
title={RayFronts: Open-Set Semantic Ray Frontiers for Online Scene Understanding and Exploration},
author={Omar Alama and Avigyan Bhattacharya and Haoyang He and Seungchan Kim and Yuheng Qiu and Wenshan Wang and Cherie Ho and Nikhil Keetha and Sebastian Scherer},
year={2025},
eprint={2504.06994},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2504.06994},
}


