
RaySt3R: Predicting Novel Depth Maps for Zero-Shot Object Completion

NeurIPS 2025

arXiv: https://arxiv.org/abs/2506.05285 | Project Page

[Figure: method overview]

📚 Citation

@misc{rayst3r,
  title={RaySt3R: Predicting Novel Depth Maps for Zero-Shot Object Completion},
  author={Bardienus P. Duisterhof and Jan Oberst and Bowen Wen and Stan Birchfield and Deva Ramanan and Jeffrey Ichnowski},
  year={2025},
  eprint={2506.05285},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.05285},
}

✅ TO-DOs

  • Inference code
  • Local gradio demo
  • Huggingface demo
  • Docker
  • Training code
  • Dataset release
  • Eval code
  • ViT-S, No-DINO and Pointmap models

⚙️ Installation

mamba create -n rayst3r python=3.11 cmake=3.14.0
mamba activate rayst3r
mamba install pytorch torchvision pytorch-cuda=12.4 -c pytorch -c nvidia # change to your version of cuda
pip install -r requirements.txt
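After installing, an optional sanity check (not part of the official steps) is to confirm that PyTorch can see your GPU:

python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"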

🚀 Usage

The expected input for RaySt3R is a folder with the following structure:


📁 data_dir/
├── cam2world.pt       # Camera-to-world transformation (PyTorch tensor), 4x4 - eye(4) if not provided
├── depth.png          # Depth image, uint16 with max 10 meters
├── intrinsics.pt      # Camera intrinsics (PyTorch tensor), 3x3 
├── mask.png           # Binary mask image
└── rgb.png            # RGB image

Note that the depth image must be saved as uint16, normalized to a 0-10 meter range. We provide an example directory in example_scene. Run RaySt3R with:

python3 eval_wrapper/eval.py example_scene/

This writes a colored point cloud back into the input directory.
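If you want to run RaySt3R on your own data, the sketch below shows one way to write this layout from Python. The directory name, resolution, intrinsics, and the synthetic image/depth/mask contents are placeholders; only the file names, tensor shapes, and the uint16 depth normalization follow the format above.

# prepare_scene.py -- minimal sketch of writing a RaySt3R input directory.
# All concrete values below are stand-ins; swap in real sensor data.
import os
import numpy as np
import torch
from PIL import Image

out_dir = "my_scene"  # hypothetical input directory
os.makedirs(out_dir, exist_ok=True)

H, W = 480, 640
rgb = np.zeros((H, W, 3), dtype=np.uint8)          # stand-in for your RGB image
mask = np.full((H, W), 255, dtype=np.uint8)        # binary object mask (0/255)
depth_m = np.full((H, W), 0.5, dtype=np.float32)   # stand-in metric depth in meters

# uint16 depth, normalized so that 65535 corresponds to 10 meters
depth_u16 = np.clip(depth_m / 10.0 * 65535.0, 0, 65535).astype(np.uint16)

K = np.array([[600.0, 0.0, W / 2.0],
              [0.0, 600.0, H / 2.0],
              [0.0, 0.0, 1.0]], dtype=np.float32)  # 3x3 pinhole intrinsics

Image.fromarray(rgb).save(os.path.join(out_dir, "rgb.png"))
Image.fromarray(mask).save(os.path.join(out_dir, "mask.png"))
Image.fromarray(depth_u16).save(os.path.join(out_dir, "depth.png"))  # 16-bit PNG
torch.save(torch.from_numpy(K), os.path.join(out_dir, "intrinsics.pt"))
torch.save(torch.eye(4), os.path.join(out_dir, "cam2world.pt"))  # identity if pose unknown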

Optional flags:

--visualize          # Spins up a rerun client to visualize predictions and camera poses
--run_octmae         # Samples novel views with the OctMAE parameters (see paper)
--set_conf N         # Sets the confidence threshold to N
--n_pred_views       # Number of predicted views along each axis of the grid; 5 -> 22 views total
--filter_all_masks   # Use all masks: a point is rejected if any mask places it in the background
--tsdf               # Fits a TSDF to the predicted depth maps
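For example, to visualize the predictions in rerun while also fitting a TSDF (flags can be combined as usual):

python3 eval_wrapper/eval.py example_scene/ --visualize --tsdf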

🧪 Gradio app

We also provide a Gradio app, which uses MoGe and Rembg to generate 3D from a single image.

Launch it with:

python app.py
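For reference, the masking step of such a pipeline could look like the sketch below. This is illustrative only (the file names are placeholders), not necessarily app.py's exact code:

# Illustrative: foreground mask extraction with rembg.
from PIL import Image
from rembg import remove

rgb = Image.open("input.png")                        # hypothetical input image
cutout = remove(rgb)                                 # RGBA image, background removed
alpha = cutout.split()[-1]                           # alpha channel marks the foreground
mask = alpha.point(lambda a: 255 if a > 127 else 0)  # binarize to 0/255
mask.save("mask.png")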

🐳 Docker Usage

Build the Docker image:

docker build -t rayst3r:latest .

Run the container with the example scene mounted:

docker run --rm --gpus all -v $(pwd)/example_scene:/data -it rayst3r:latest

This will:

  • Build the container with all dependencies and pre-download the required models
  • Mount your local example_scene directory to /data inside the container
  • Launch an interactive shell in the container

Once inside the container, you can:

Run RaySt3R evaluation:

python3 eval_wrapper/eval.py /data

Launch the Gradio demo with public sharing:

python3 app.py --share

This will produce a public Gradio link for the demo that you can share with other users. Note that the demo is then exposed on the internet; it is generally not a good idea to leave it up for long.

Launch the Gradio demo without sharing:

# Exit the container and run with port mapping:
docker run --rm --gpus all -v $(pwd)/example_scene:/data -p 7860:7860 -it rayst3r:latest
# Then inside the container:
python3 app.py

Training RaySt3R

Download and unzip the RaySt3R dataset from Hugging Face. Next, in xps/train_rayst3r.py, replace

data_dirs = ["/YOUR/PATH/TO/rayst3r/dataset"]

with one or more paths where the OctMAE and GSO datasets are located. Now, running:

python3 xps/train_rayst3r.py

prints the command to start training RaySt3R.
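For instance, with both datasets unpacked locally (the paths below are placeholders):

data_dirs = ["/data/rayst3r/octmae", "/data/rayst3r/gso"]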

🎛️ Parameter Guide

Certain applications may benefit from different hyperparameters; here we provide guidance on how to select them.

🔁 View Sampling

We sample novel views evenly on a cylindrical equal-area projection of the sphere. Customize sampling in sample_poses.py. Use --n_pred_views to reduce the total number of views, which makes inference faster and reduces overlap and artifacts.
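For intuition, the sketch below implements generic cylindrical equal-area sampling: uniform in height and in azimuth, hence uniform density on the sphere. It is illustrative only; the repository's actual grid lives in sample_poses.py and yields 22 total views for 5 views per axis, so it differs from this plain grid.

# Illustrative cylindrical equal-area (Lambert) sampling of camera centers.
# Not the repo's sample_poses.py; function name and radius are assumptions.
import numpy as np

def sample_view_centers(n_per_axis: int, radius: float = 1.0) -> np.ndarray:
    z = np.linspace(-1 + 1 / n_per_axis, 1 - 1 / n_per_axis, n_per_axis)  # uniform heights
    az = np.linspace(0.0, 2.0 * np.pi, n_per_axis, endpoint=False)        # uniform azimuths
    zz, aa = np.meshgrid(z, az, indexing="ij")
    r = np.sqrt(1.0 - zz ** 2)                                            # ring radius at height z
    centers = np.stack([r * np.cos(aa), r * np.sin(aa), zz], axis=-1)
    return radius * centers.reshape(-1, 3)

print(sample_view_centers(5).shape)  # (25, 3) here; the repo's grid keeps 22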

🟢 Confidence Threshold

You can set the confidence threshold with the --set_conf flag. As shown in the paper, a higher threshold generally improves accuracy and reduces edge bleeding, but also reduces completeness.

🧼 RaySt3R Masks

In addition to what was presented in the paper, we provide the option to consider all predicted masks for each point: if any of the predicted masks classifies a point as background, the point is removed. In our limited testing this led to cleaner predictions, but it occasionally carves out crucial parts of the geometry.
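A minimal sketch of this rule (illustrative only; the array names and shapes are assumptions, not the repo's internals):

# Keep a point only if every predicted view's mask labels it foreground.
import numpy as np

def filter_all_masks(is_foreground: np.ndarray) -> np.ndarray:
    """is_foreground: (n_views, n_points) bool array from the predicted masks.
    Returns a boolean keep-mask over points: one background vote rejects the point."""
    return is_foreground.all(axis=0)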

🏋️ Training

The RaySt3R training command is provided in xps/train_rayst3r.py; documentation will follow later.
