
Neural Structured Light

arXiv Project Page

(Teaser figure)

This repository provides the official implementation of the SIGGRAPH Asia 2025 conference paper *Robust Single-shot Structured Light 3D Imaging via Neural Feature Decoding*.

We introduce Neural Structured Light, which moves single-shot structured-light decoding from the fragile pixel domain into the neural feature domain.
Our model, trained entirely on synthetic data, achieves strong sim2real generalization: it delivers higher accuracy and better completeness on real-world scenes, and is more robust to long-standing challenges such as strong specularities, strong ambient illumination, and translucent materials.

Models

  • Monocular structured-light checkpoint
    (input: one structured-light camera image + one projected pattern)

  • Binocular structured-light checkpoint
    (input: left + right structured-light camera images)

  • Binocular + projector structured-light checkpoint
    (input: left + right camera images + projector pattern)
    Note: In practice, this model requires strict trinocular rectification across left–projector–right. All three optical centers must be collinear — something extremely difficult to achieve with off-the-shelf hardware.
    We strongly recommend NOT using this model in real setups until a more practical solution is developed.

    Additional note: to run this model you must also pass the extra forward argument `corr_middle_rate`, the ratio of the (projector–left) baseline to the (left–right) baseline after trinocular rectification; see the sketch below.
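
As a hypothetical illustration (the function below is not part of this repo), `corr_middle_rate` can be computed from the rectified optical centers as the ratio of the two baselines:

```python
import numpy as np

def corr_middle_rate(c_left, c_proj, c_right):
    """Ratio of the (projector-left) baseline to the (left-right) baseline.

    c_left, c_proj, c_right: 3-vectors, optical centers after trinocular
    rectification (assumed collinear).
    """
    return np.linalg.norm(c_proj - c_left) / np.linalg.norm(c_right - c_left)

# Example: a projector midway between the two cameras gives a rate of 0.5.
print(corr_middle_rate(np.zeros(3), np.array([0.05, 0.0, 0.0]),
                       np.array([0.10, 0.0, 0.0])))
```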

Place the downloaded weights under `zoo/ckpts/`, or any directory you prefer.

Environment

conda create -n nsl python=3.11
conda activate nsl
pip install -r requirement.txt

Inference Example

See the example inference script `inf.py`.
Important: this script does not perform rectification.
You must provide already-rectified images and camera parameters (see the sketch below).
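
For reference, here is a minimal stereo-rectification sketch using OpenCV. It is not part of this repo, and every calibration value and file name in it is a hypothetical placeholder; substitute your own calibration and image paths.

```python
import cv2
import numpy as np

h, w = 720, 1280                               # image size (placeholder)
K_l = np.array([[800.0, 0.0, 640.0],
                [0.0, 800.0, 360.0],
                [0.0, 0.0, 1.0]])              # left intrinsics (placeholder)
K_r = K_l.copy()                               # right intrinsics (placeholder)
d_l = np.zeros(5)                              # left distortion coefficients
d_r = np.zeros(5)                              # right distortion coefficients
R = np.eye(3)                                  # rotation between the two cameras
T = np.array([-0.10, 0.0, 0.0])                # translation (10 cm baseline)

# Compute rectifying rotations (R1, R2) and projection matrices (P1, P2).
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_l, d_l, K_r, d_r, (w, h), R, T)

# Build per-pixel remapping tables and warp the raw images.
map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, d_l, R1, P1, (w, h), cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, d_r, R2, P2, (w, h), cv2.CV_32FC1)
limg = cv2.remap(cv2.imread("left_raw.png"), map_lx, map_ly, cv2.INTER_LINEAR)
rimg = cv2.remap(cv2.imread("right_raw.png"), map_rx, map_ry, cv2.INTER_LINEAR)
cv2.imwrite("limg_rect.png", limg)
cv2.imwrite("rimg_rect.png", rimg)
```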

Monocular structured light (left_img + pattern)

python inf.py --cfgs cfgs/local/NSL_left-patt.yaml \
    --weight zoo/ckpts/nsl_left-patt.pt \
    --limg example/tearoom/limg_lp.png \
    --patt example/tearoom/patt_lp.png \
    --param example/tearoom/param_lp.npy

Binocular structured light (left_img + right_img)

python inf.py --cfgs cfgs/local/NSL_left-right.yaml \
    --weight zoo/ckpts/nsl_left-right.pt \
    --limg example/tearoom/limg_lr.png \
    --rimg example/tearoom/rimg_lr.png \
    --param example/tearoom/param_lr.npy

`param.npy` must contain a dictionary of camera parameters, including:

  • `L_intri`, `R_intri`, `P_intri`: 3×3 intrinsics of the left camera, right camera, and projector
  • `L_extri`, `R_extri`, `P_extri`: 4×4 extrinsics of the left camera, right camera, and projector

Entries that a given model does not use can be filled with any placeholder matrix.
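
A minimal sketch of writing a compatible `param.npy` for the binocular model; the values below are hypothetical placeholders, not real calibration, and the unused projector entries are filled with identity matrices:

```python
import numpy as np

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])        # 3x3 intrinsics (placeholder)
E_l = np.eye(4)                        # 4x4 left extrinsics (placeholder)
E_r = np.eye(4)
E_r[0, 3] = -0.10                      # hypothetical 10 cm baseline

np.save("example/tearoom/param_lr.npy", {
    "L_intri": K, "R_intri": K, "P_intri": np.eye(3),   # projector unused here
    "L_extri": E_l, "R_extri": E_r, "P_extri": np.eye(4),
})

# np.save pickles the dict, so load it back with:
# param = np.load("example/tearoom/param_lr.npy", allow_pickle=True).item()
```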

Refiner note: although the second-stage refiner works well on images captured with industrial-grade cameras, its performance degrades on low-quality inputs (e.g., RealSense IR camera images). The first stage is sufficiently robust for most practical purposes. If needed, you can skip the refiner by passing `--skip_refine`.

Acknowledgements

This repository is heavily inspired by several excellent prior works; we sincerely thank their authors!

License

This work is licensed under CC BY-NC 4.0.

