This package would not be possible without the excellent work of the following projects and teams:
- Team: ByteDance Seed Team
- Repository: ByteDance-Seed/Depth-Anything-3
- Paper: Depth Anything 3: A New Foundation for Metric and Relative Depth Estimation
- Project Page: https://depth-anything-3.github.io/
This wrapper integrates the state-of-the-art Depth Anything 3 model for monocular depth estimation. All credit for the model architecture and training goes to the original authors.
This package was inspired by the following excellent ROS2 wrapper implementations:
- Depth Anything V2 ROS2: grupo-avispa/depth_anything_v2_ros2
- Depth Anything ROS2: polatztrk/depth_anything_ros
- TensorRT Optimized Wrapper: scepter914/DepthAnything-ROS
Special thanks to these developers for demonstrating effective patterns for ROS2 integration.
This package is a camera-agnostic ROS2 wrapper for Depth Anything 3 (DA3), providing real-time monocular depth estimation from standard RGB images. It is designed to work with any camera that publishes standard sensor_msgs/Image messages.
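For orientation, the sketch below illustrates the general data flow such a wrapper implements: subscribe to a sensor_msgs/Image topic, run monocular depth inference, and republish the result as a 32FC1 depth image. It is a minimal conceptual sketch only, not the packaged node; the transformers-based model loading, the DA3-BASE model ID, and the topic names are assumptions.

```python
# Conceptual sketch only -- NOT the packaged node. Model loading via transformers,
# the model ID, and the topic names are assumptions for illustration.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import torch
from transformers import AutoImageProcessor, AutoModelForDepthEstimation


class MonoDepthSketch(Node):
    def __init__(self):
        super().__init__('mono_depth_sketch')
        self.bridge = CvBridge()
        self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
        self.processor = AutoImageProcessor.from_pretrained('depth-anything/DA3-BASE')
        self.model = AutoModelForDepthEstimation.from_pretrained(
            'depth-anything/DA3-BASE').to(self.device).eval()
        self.pub = self.create_publisher(Image, '~/depth', 1)
        self.create_subscription(Image, '~/image_raw', self.on_image, 1)

    def on_image(self, msg: Image):
        # ROS image -> RGB numpy array -> model input tensors
        rgb = self.bridge.imgmsg_to_cv2(msg, desired_encoding='rgb8')
        inputs = self.processor(images=rgb, return_tensors='pt').to(self.device)
        with torch.no_grad():
            depth = self.model(**inputs).predicted_depth  # (1, H, W) relative depth
        depth = depth.squeeze().cpu().numpy().astype('float32')
        # Publish as 32FC1, preserving the input header for time alignment
        out = self.bridge.cv2_to_imgmsg(depth, encoding='32FC1')
        out.header = msg.header
        self.pub.publish(out)


def main():
    rclpy.init()
    rclpy.spin(MonoDepthSketch())


if __name__ == '__main__':
    main()
```

The packaged node adds the resizing, normalization, colorized output, and confidence publishing described in the Configuration section.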
- Camera-Agnostic Design: Works with ANY camera publishing standard ROS2 image topics
- Multiple Model Support: All DA3 variants (Small, Base, Large, Giant, Nested)
- CUDA Acceleration: Optimized for NVIDIA GPUs with automatic CPU fallback
- Multi-Camera Support: Run multiple instances for multi-camera setups
- Real-Time Performance: Optimized for low latency on Jetson Orin AGX
- Production Ready: Comprehensive error handling, logging, and testing
- Docker Support: Pre-configured Docker and Docker Compose files
- Example Images: Sample test images and benchmark scripts included
- Performance Profiling: Built-in benchmarking and profiling tools
- TensorRT Support: Optimization scripts for NVIDIA Jetson platforms
- Post-Processing: Depth map filtering, hole filling, and enhancement
- INT8 Quantization: Model compression for faster inference
- ONNX Export: Deploy to various platforms and runtimes
- Complete Documentation: Sphinx-based API docs with comprehensive tutorials
- CI/CD Ready: GitHub Actions workflow for automated testing and validation
- Docker Testing: Automated Docker image validation suite
- RViz2 Visualization: Pre-configured visualization setup
- Primary: NVIDIA Jetson (JetPack 6.x)
- Compatible: Any system with Ubuntu 22.04, ROS2 Humble, and CUDA 12.x
- ROS2 Distribution: Humble Hawksbill
- Python: 3.10+
You do NOT need to manually clone the ByteDance Depth Anything 3 repository. The installation process handles everything automatically.
1. Python Package (installed via pip in Step 2):
- ByteDance DA3 Python API and inference code
- Installed with:
pip install git+https://github.com/ByteDance-Seed/Depth-Anything-3.git
- Pip handles cloning and installation automatically
- One-time setup, no manual git clone needed
2. Pre-Trained Models (downloaded automatically on first run):
- Model weights download from Hugging Face Hub on first use
- Cached in ~/.cache/huggingface/hub/ for reuse
- Internet connection required for initial download
- Subsequent runs use cached models (no internet needed)
Summary: Install the package once with pip (Step 2), then models download automatically when you first run the node.
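If you want to confirm what is already cached, the huggingface_hub package (installed in Step 2) can scan the local cache. A small check, assuming huggingface_hub is available:

```python
# List locally cached Hugging Face repositories and their on-disk size
from huggingface_hub import scan_cache_dir

cache = scan_cache_dir()
for repo in cache.repos:
    print(f'{repo.repo_id}: {repo.size_on_disk_str}')
```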
For robots or systems without internet access, pre-download models on a connected machine:
# On a machine WITH internet connection:
python3 -c "
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
# Download model (only needs to be done once)
AutoImageProcessor.from_pretrained('depth-anything/DA3-BASE')
AutoModelForDepthEstimation.from_pretrained('depth-anything/DA3-BASE')
print('Model downloaded to ~/.cache/huggingface/hub/')
"
# Copy the cache directory to your offline robot:
# On source machine:
tar -czf da3_models.tar.gz -C ~/.cache/huggingface .
# On target robot (via USB drive, SCP, etc.):
mkdir -p ~/.cache/huggingface
tar -xzf da3_models.tar.gz -C ~/.cache/huggingface/
Alternatively, set a custom cache directory:
# Download to specific location
export HF_HOME=/path/to/models
python3 -c "from transformers import AutoModelForDepthEstimation; \
AutoModelForDepthEstimation.from_pretrained('depth-anything/DA3-BASE')"
# On robot, point to the same location
export HF_HOME=/path/to/models
ros2 launch depth_anything_3_ros2 depth_anything_3.launch.py
Available Models:
- `depth-anything/DA3-SMALL` - Fastest, ~1.5GB download
- `depth-anything/DA3-BASE` - Balanced, ~2.5GB download
- `depth-anything/DA3-LARGE` - Best quality, ~4GB download
- `depth-anything/DA3-GIANT` - Maximum quality, ~6.5GB download
- Important: Dependencies and Model Downloads
- Installation
- Quick Start
- Configuration
- Usage Examples
- Docker Deployment
- Example Images and Benchmarks
- Performance
- Documentation
- Troubleshooting
- Development
- Citation
- License
- ROS2 Humble on Ubuntu 22.04:
# If not already installed
sudo apt update
sudo apt install ros-humble-desktop
- CUDA 12.x (optional, for GPU acceleration):
# For Jetson Orin AGX, this comes with JetPack 6.x
# For desktop systems, install CUDA Toolkit from NVIDIA
nvidia-smi  # Verify CUDA installation
- Internet Connection (for initial setup):
- Required for Step 2 (pip install of DA3 package)
- Required for Step 5 (model weights download from Hugging Face Hub)
- See Offline Operation if deploying to robots without internet
sudo apt install -y \
ros-humble-cv-bridge \
ros-humble-sensor-msgs \
ros-humble-std-msgs \
ros-humble-image-transport \
ros-humble-rclpy
# Create and activate a virtual environment (recommended)
python3 -m venv ~/da3_venv
source ~/da3_venv/bin/activate
# Install PyTorch with CUDA support
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu121
# Install other dependencies
pip3 install "transformers>=4.35.0" \
  "huggingface-hub>=0.19.0" \
  "opencv-python>=4.8.0" \
  "pillow>=10.0.0" \
  "numpy>=1.24.0" \
  "timm>=0.9.0"
# Install ByteDance DA3 Python API (pip handles cloning automatically)
# This provides the model inference code, NOT the pre-trained weights
# Model weights will download from Hugging Face Hub on first run
pip3 install git+https://github.com/ByteDance-Seed/Depth-Anything-3.git
Note: For CPU-only systems, install PyTorch without CUDA:
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cpu
# Navigate to your ROS2 workspace
cd ~/ros2_ws/src # Or create: mkdir -p ~/ros2_ws/src && cd ~/ros2_ws/src
# Clone THIS ROS2 wrapper repository (not the ByteDance DA3 repo)
git clone https://github.com/GerdsenAI/GerdsenAI-Depth-Anything-3-ROS2-Wrapper.git
# Build the package
cd ~/ros2_ws
colcon build --packages-select depth_anything_3_ros2
# Source the workspace
source install/setup.bash
# Test that the package is found
ros2 pkg list | grep depth_anything_3_ros2
# Run tests (optional)
colcon test --packages-select depth_anything_3_ros2
colcon test-result --verbose
Pre-download models to avoid delays on first run. This step is REQUIRED if deploying to offline robots.
# Download a specific model (requires internet connection)
python3 -c "
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
print('Downloading DA3-BASE model...')
AutoImageProcessor.from_pretrained('depth-anything/DA3-BASE')
AutoModelForDepthEstimation.from_pretrained('depth-anything/DA3-BASE')
print('Model cached to ~/.cache/huggingface/hub/')
print('You can now run offline!')
"
# For offline robots, copy the cache:
# tar -czf da3_models.tar.gz -C ~/.cache/huggingface .
# Transfer da3_models.tar.gz to robot and extract:
# tar -xzf da3_models.tar.gz -C ~/.cache/huggingface/
Alternative models:
- For faster inference: replace DA3-BASE with DA3-SMALL
- For best quality: replace DA3-BASE with DA3-LARGE
See Dependencies and Model Downloads for complete offline deployment instructions.
The fastest way to get started is with a standard USB camera:
# Terminal 1: Launch USB camera driver
ros2 run v4l2_camera v4l2_camera_node --ros-args \
-p image_size:="[640,480]" \
-r __ns:=/camera
# Terminal 2: Launch Depth Anything 3
ros2 launch depth_anything_3_ros2 depth_anything_3.launch.py \
image_topic:=/camera/image_raw \
model_name:=depth-anything/DA3-BASE \
device:=cuda
# Terminal 3: Visualize with RViz2
rviz2 -d $(ros2 pkg prefix depth_anything_3_ros2)/share/depth_anything_3_ros2/rviz/depth_view.rviz
# USB camera example (requires v4l2_camera)
ros2 launch depth_anything_3_ros2 usb_camera_example.launch.py
# Static image test (requires image_publisher)
ros2 launch depth_anything_3_ros2 image_publisher_test.launch.py \
image_path:=/path/to/your/test_image.jpg
All parameters can be configured via launch files or command line:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model_name` | string | `depth-anything/DA3-BASE` | Hugging Face model ID or local path |
| `device` | string | `cuda` | Inference device (cuda or cpu) |
| `cache_dir` | string | `""` | Model cache directory (empty for default) |
| `inference_height` | int | `518` | Height for inference (model input) |
| `inference_width` | int | `518` | Width for inference (model input) |
| `input_encoding` | string | `bgr8` | Expected input encoding (bgr8 or rgb8) |
| `normalize_depth` | bool | `true` | Normalize depth to [0, 1] range |
| `publish_colored` | bool | `true` | Publish colorized depth visualization |
| `publish_confidence` | bool | `true` | Publish confidence map |
| `colormap` | string | `turbo` | Colormap for visualization |
| `queue_size` | int | `1` | Subscriber queue size |
| `log_inference_time` | bool | `false` | Log performance metrics |
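These parameters can also be set from your own launch file. A hypothetical sketch (the package and executable names match the commands used elsewhere in this README; the file itself is not shipped with the package):

```python
# my_depth_launch.py -- hypothetical custom launch file setting node parameters
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='depth_anything_3_ros2',
            executable='depth_anything_3_node',
            name='depth_anything_3',
            parameters=[{
                'model_name': 'depth-anything/DA3-BASE',
                'device': 'cuda',
                'inference_height': 518,
                'inference_width': 518,
                'colormap': 'turbo',
                'log_inference_time': True,
            }],
            # Remap the node's private image topic to your camera topic
            remappings=[('~/image_raw', '/camera/image_raw')],
        )
    ])
```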
| Model | Parameters | Use Case |
|---|---|---|
| `depth-anything/DA3-SMALL` | 0.08B | Fast inference, lower accuracy |
| `depth-anything/DA3-BASE` | 0.12B | Balanced performance (recommended) |
| `depth-anything/DA3-LARGE` | 0.35B | Higher accuracy |
| `depth-anything/DA3-GIANT` | 1.15B | Best accuracy, slower |
| `depth-anything/DA3NESTED-GIANT-LARGE` | Combined | Metric scale reconstruction |
Subscribed topics:
- `~/image_raw` (sensor_msgs/Image): Input RGB image from camera
- `~/camera_info` (sensor_msgs/CameraInfo): Optional camera intrinsics
Published topics:
- `~/depth` (sensor_msgs/Image): Depth map (32FC1 encoding)
- `~/depth_colored` (sensor_msgs/Image): Colorized depth visualization (BGR8)
- `~/confidence` (sensor_msgs/Image): Confidence map (32FC1)
- `~/depth/camera_info` (sensor_msgs/CameraInfo): Camera info for depth image
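A minimal consumer of the depth output might look like the sketch below. It assumes the node runs under the `depth_anything_3` name so the private `~/depth` topic resolves to `/depth_anything_3/depth`; adjust the topic to your namespace and remappings. With `normalize_depth:=true` the values are relative depth in [0, 1].

```python
# Minimal depth consumer sketch (topic name is an assumption; adjust as needed)
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class DepthListener(Node):
    def __init__(self):
        super().__init__('depth_listener')
        self.bridge = CvBridge()
        self.create_subscription(Image, '/depth_anything_3/depth', self.on_depth, 1)

    def on_depth(self, msg: Image):
        # 32FC1 image -> float32 numpy array
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='32FC1')
        self.get_logger().info(
            f'{depth.shape[1]}x{depth.shape[0]} depth, '
            f'min={depth.min():.3f}, max={depth.max():.3f}')


def main():
    rclpy.init()
    rclpy.spin(DepthListener())


if __name__ == '__main__':
    main()
```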
Complete example with a standard USB webcam:
# Install v4l2_camera if not already installed
sudo apt install ros-humble-v4l2-camera
# Launch everything together
ros2 launch depth_anything_3_ros2 usb_camera_example.launch.py \
video_device:=/dev/video0 \
model_name:=depth-anything/DA3-BASE
Connect to a ZED camera (requires separate ZED ROS2 wrapper installation):
# Launch ZED camera separately
ros2 launch zed_wrapper zed_camera.launch.py camera_model:=zedxm
# In another terminal, launch depth estimation with topic remapping
ros2 launch depth_anything_3_ros2 depth_anything_3.launch.py \
image_topic:=/zed/zed_node/rgb/image_rect_color \
camera_info_topic:=/zed/zed_node/rgb/camera_info
Or use the provided example:
ros2 launch depth_anything_3_ros2 zed_camera_example.launch.py \
camera_model:=zedxm
Connect to a RealSense camera (requires realsense-ros):
# Launch RealSense camera
ros2 launch realsense2_camera rs_launch.py
# Launch depth estimation
ros2 launch depth_anything_3_ros2 realsense_example.launch.py
Run depth estimation on 4 cameras simultaneously:
# Launch multi-camera setup
ros2 launch depth_anything_3_ros2 multi_camera.launch.py \
camera_namespaces:="cam1,cam2,cam3,cam4" \
image_topics:="/cam1/image_raw,/cam2/image_raw,/cam3/image_raw,/cam4/image_raw" \
model_name:=depth-anything/DA3-BASE
Test with a static image using image_publisher:
sudo apt install ros-humble-image-publisher
ros2 launch depth_anything_3_ros2 image_publisher_test.launch.py \
image_path:=/path/to/test_image.jpg \
model_name:=depth-anything/DA3-BASE
Switch between models for different performance/accuracy tradeoffs:
# Fast inference (DA3-Small)
ros2 launch depth_anything_3_ros2 depth_anything_3.launch.py \
model_name:=depth-anything/DA3-SMALL \
image_topic:=/camera/image_raw
# Best accuracy (DA3-Giant) - requires more GPU memory
ros2 launch depth_anything_3_ros2 depth_anything_3.launch.py \
model_name:=depth-anything/DA3-GIANT \
image_topic:=/camera/image_raw
Run on systems without CUDA:
ros2 launch depth_anything_3_ros2 depth_anything_3.launch.py \
image_topic:=/camera/image_raw \
model_name:=depth-anything/DA3-BASE \
device:=cpu
Use a custom parameter file:
# Create custom config file
cat > my_config.yaml <<EOF
depth_anything_3:
ros__parameters:
model_name: "depth-anything/DA3-LARGE"
device: "cuda"
normalize_depth: true
publish_colored: true
colormap: "viridis"
log_inference_time: true
EOF
# Launch with custom config
ros2 run depth_anything_3_ros2 depth_anything_3_node --ros-args \
--params-file my_config.yaml \
-r ~/image_raw:=/camera/image_raw
Docker configuration files are provided for building and deploying on both CPU and GPU systems.
Important: No pre-built Docker images are published to Docker Hub or any container registry. You must build the images locally with `docker-compose build` or `docker-compose up` (which builds automatically).
# Step 1: Clone the repository
git clone https://github.com/GerdsenAI/GerdsenAI-Depth-Anything-3-ROS2-Wrapper.git
cd GerdsenAI-Depth-Anything-3-ROS2-Wrapper
# Step 2: Build and run (choose GPU or CPU)
docker-compose up -d depth-anything-3-gpu # For GPU (requires nvidia-docker)
# OR
docker-compose up -d depth-anything-3-cpu # For CPU-only
# Step 3: Enter container and run the node
docker exec -it da3_ros2_gpu bash # For GPU container
# OR
docker exec -it da3_ros2_cpu bash # For CPU container
# Inside the container:
ros2 run depth_anything_3_ros2 depth_anything_3_node --ros-args -p device:=cuda
# CPU-only mode
docker-compose up -d depth-anything-3-cpu
docker exec -it da3_ros2_cpu bash
# GPU mode (requires nvidia-docker)
docker-compose up -d depth-anything-3-gpu
docker exec -it da3_ros2_gpu bash
# Development mode (source mounted)
docker-compose up -d depth-anything-3-dev
# Build GPU image
docker build -t depth_anything_3_ros2:gpu \
--build-arg BUILD_TYPE=cuda-base \
.
# Run with USB camera
docker run -it --rm \
--runtime=nvidia \
--gpus all \
--network host \
--privileged \
-v /dev:/dev:rw \
depth_anything_3_ros2:gpu
The docker-compose.yml includes:
- `depth-anything-3-cpu`: CPU-only deployment
- `depth-anything-3-gpu`: GPU-accelerated deployment
- `depth-anything-3-dev`: Development environment
- `depth-anything-3-usb-camera`: Standalone USB camera service
Automated test suite for validating Docker images:
cd docker
chmod +x test_docker.sh
./test_docker.sh
This comprehensive test suite validates:
- Docker and Docker Compose installation
- CPU and GPU image builds
- ROS2 installation and package builds
- Python dependencies
- CUDA availability (GPU images)
- Volume mounts and networking
- Model download capability
For detailed Docker documentation, see docker/README.md.
Download sample images for quick testing:
cd examples
./scripts/download_samples.sh
This downloads sample indoor, outdoor, and object images from public datasets.
# Test single image
python3 examples/scripts/test_with_images.py \
--image examples/images/outdoor/street_01.jpg \
--model depth-anything/DA3-BASE \
--device cuda \
--output-dir results/
# Batch process directory
python3 examples/scripts/test_with_images.py \
--input-dir examples/images/outdoor/ \
--output-dir results/ \
--model depth-anything/DA3-BASE
Run comprehensive benchmarks across multiple models and image sizes:
# Benchmark multiple models
python3 examples/scripts/benchmark.py \
--images examples/images/ \
--models depth-anything/DA3-SMALL,depth-anything/DA3-BASE,depth-anything/DA3-LARGE \
--sizes 640x480,1280x720 \
--device cuda \
--output benchmark_results.json
Example output:
================================================================================
BENCHMARK SUMMARY
================================================================================
Model Device Size FPS Time (ms) GPU Mem (MB)
--------------------------------------------------------------------------------
depth-anything/DA3-SMALL cuda 640x480 25.3 39.5 1512
depth-anything/DA3-BASE cuda 640x480 19.8 50.5 2489
depth-anything/DA3-LARGE cuda 640x480 11.7 85.4 3952
================================================================================
Apply filtering, hole filling, and enhancement to depth maps:
cd examples/scripts
# Process single depth map
python3 depth_postprocess.py \
--input depth.npy \
--output processed.npy \
--visualize
# Batch process directory
python3 depth_postprocess.py \
--input depth_dir/ \
--output processed_dir/ \
--batch
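For reference, the sketch below illustrates the kind of processing involved (hole filling via inpainting plus edge-preserving smoothing). It is an illustration only, not the depth_postprocess.py script itself, and the NaN/zero invalid-pixel convention is an assumption.

```python
# Illustrative depth post-processing: fill invalid pixels, then smooth while keeping edges
import cv2
import numpy as np


def postprocess_depth(depth: np.ndarray) -> np.ndarray:
    """depth: float32 map; pixels that are NaN or <= 0 are treated as invalid (assumption)."""
    invalid = ~np.isfinite(depth) | (depth <= 0)
    filled = np.where(invalid, 0.0, depth).astype(np.float32)
    # cv2.inpaint needs 8-bit input, so inpaint a scaled copy and map back
    scale = float(filled.max()) or 1.0
    depth_u8 = np.clip(filled / scale * 255.0, 0, 255).astype(np.uint8)
    inpainted = cv2.inpaint(depth_u8, invalid.astype(np.uint8), 3, cv2.INPAINT_TELEA)
    filled = inpainted.astype(np.float32) / 255.0 * scale
    # Edge-preserving bilateral filter
    return cv2.bilateralFilter(filled, d=5, sigmaColor=0.1 * scale, sigmaSpace=5.0)


depth = np.load('depth.npy')          # e.g. the --input file above
np.save('processed.npy', postprocess_depth(depth))
```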
Synchronize depth estimation from multiple cameras:
# Terminal 1: Launch multi-camera setup
ros2 launch depth_anything_3_ros2 multi_camera.launch.py \
camera_namespaces:=cam_left,cam_right \
image_topics:=/cam_left/image_raw,/cam_right/image_raw
# Terminal 2: Run synchronizer
python3 multi_camera_sync.py \
--cameras cam_left cam_right \
--sync-threshold 0.05 \
--output synchronized_depth/
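Conceptually, the synchronization can be done with message_filters' ApproximateTimeSynchronizer. The sketch below is an illustration with assumed topic names and a 0.05 s slop matching --sync-threshold above; it is not the bundled multi_camera_sync.py.

```python
# Illustrative two-camera depth synchronization sketch (topic names are assumptions)
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from message_filters import Subscriber, ApproximateTimeSynchronizer


class DepthSyncSketch(Node):
    def __init__(self):
        super().__init__('depth_sync_sketch')
        left = Subscriber(self, Image, '/cam_left/depth_anything_3/depth')
        right = Subscriber(self, Image, '/cam_right/depth_anything_3/depth')
        # Pair messages whose stamps differ by at most 50 ms
        self.sync = ApproximateTimeSynchronizer([left, right], queue_size=10, slop=0.05)
        self.sync.registerCallback(self.on_pair)

    def on_pair(self, left_msg: Image, right_msg: Image):
        dt = abs((left_msg.header.stamp.sec + left_msg.header.stamp.nanosec * 1e-9)
                 - (right_msg.header.stamp.sec + right_msg.header.stamp.nanosec * 1e-9))
        self.get_logger().info(f'synchronized pair, stamp difference {dt * 1e3:.1f} ms')


def main():
    rclpy.init()
    rclpy.spin(DepthSyncSketch())


if __name__ == '__main__':
    main()
```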
Optimize models for maximum performance on Jetson platforms:
# Optimize model
python3 optimize_tensorrt.py \
--model depth-anything/DA3-BASE \
--output da3_base_trt.pth \
--precision fp16 \
--benchmark
# Expected speedup: 2-3x faster inference
Quantization, ONNX export, and profiling:
# INT8 quantization
python3 performance_tuning.py quantize \
--model depth-anything/DA3-BASE \
--output da3_base_int8.pth
# Export to ONNX
python3 performance_tuning.py export-onnx \
--model depth-anything/DA3-BASE \
--output da3_base.onnx \
--benchmark
# Profile layers
python3 performance_tuning.py profile \
--model depth-anything/DA3-BASE \
--layers \
--memory
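For reference, an ONNX export along these lines could look like the sketch below. It assumes the model loads through transformers' AutoModelForDepthEstimation and wraps it so tracing returns only the depth tensor; performance_tuning.py may do this differently, and tracing a given checkpoint is not guaranteed to succeed.

```python
# Hypothetical ONNX export sketch -- not the performance_tuning.py implementation
import torch
from transformers import AutoModelForDepthEstimation


class DepthOnly(torch.nn.Module):
    """Wrapper that returns only the depth tensor so ONNX tracing stays simple."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, pixel_values):
        return self.model(pixel_values=pixel_values).predicted_depth


model = AutoModelForDepthEstimation.from_pretrained('depth-anything/DA3-BASE').eval()
dummy = torch.randn(1, 3, 518, 518)  # matches the default inference resolution

torch.onnx.export(
    DepthOnly(model), dummy, 'da3_base.onnx',
    input_names=['pixel_values'], output_names=['predicted_depth'],
    dynamic_axes={'pixel_values': {0: 'batch'}, 'predicted_depth': {0: 'batch'}},
    opset_version=17,
)
print('Exported da3_base.onnx')
```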
Process ROS2 bags through depth estimation:
./ros2_batch_process.sh \
-i ./raw_bags \
-o ./depth_bags \
-m depth-anything/DA3-BASE \
-d cuda
Profile ROS2 node performance:
python3 profile_node.py \
--model depth-anything/DA3-BASE \
--device cuda \
--duration 60
For more examples, see examples/README.md.
Complete documentation is available in multiple formats:
Build and view the complete API documentation:
cd docs
pip install -r requirements.txt
make html
open build/html/index.html  # or xdg-open on Linux
- API Reference: Complete API documentation with examples
- User Guides:
  - Installation and setup
  - Camera integration guide
  - Multi-camera configuration
  - Performance optimization
  - Troubleshooting
- Tutorials:
  - Quick Start Tutorial - Get up and running in minutes
  - USB Camera Setup - Complete USB camera guide
  - Multi-Camera Setup - Synchronized multi-camera depth
  - Performance Tuning - Optimization guide for all platforms
Tested with 640x480 input images:
| Model | FPS | Inference Time | GPU Memory |
|---|---|---|---|
| DA3-Small | ~25 FPS | ~40ms | ~1.5 GB |
| DA3-Base | ~20 FPS | ~50ms | ~2.5 GB |
| DA3-Large | ~12 FPS | ~85ms | ~4.0 GB |
| DA3-Giant | ~6 FPS | ~165ms | ~6.5 GB |
- TensorRT Optimization (Jetson platforms):
cd examples/scripts
python3 optimize_tensorrt.py --model depth-anything/DA3-BASE \
--output da3_base_trt.pth --precision fp16
# Expected: 2-3x speedup
- INT8 Quantization for faster inference:
python3 performance_tuning.py quantize \
--model depth-anything/DA3-BASE --output da3_base_int8.pth
# 50-75% smaller, 20-40% faster
- Reduce Input Resolution: Lower resolution images process faster
--param inference_height:=384 --param inference_width:=512
- Use Smaller Models: DA3-SMALL offers the best speed, DA3-BASE balances speed and accuracy
- Queue Size: Set to 1 to always process the latest frame
--param queue_size:=1
- Disable Unused Outputs: Save processing time
--param publish_colored_depth:=false
--param publish_confidence:=false
- Multiple Cameras: Each camera runs in a separate process with a shared GPU
- Performance Profiling: Profile to identify bottlenecks
python3 examples/scripts/profile_node.py --model depth-anything/DA3-BASE
For a comprehensive optimization guide, see the Performance Tuning Tutorial.
Error: Failed to load model from Hugging Face Hub or Connection timeout
Solutions:
- Check internet connection: `ping huggingface.co`
- Verify Hugging Face Hub is accessible: it may be blocked by a firewall or proxy
- Pre-download models manually:
python3 -c "from transformers import AutoImageProcessor, AutoModelForDepthEstimation; AutoImageProcessor.from_pretrained('depth-anything/DA3-BASE'); AutoModelForDepthEstimation.from_pretrained('depth-anything/DA3-BASE')"
- Use a custom cache directory: set the `HF_HOME=/path/to/models` environment variable
- For offline robots: see the Offline Operation section
Error: Model depth-anything/DA3-BASE not found on robot without internet
Solution: Pre-download models and copy cache directory:
# On development machine WITH internet:
python3 -c "from transformers import AutoModelForDepthEstimation; \
AutoModelForDepthEstimation.from_pretrained('depth-anything/DA3-BASE')"
tar -czf da3_models.tar.gz -C ~/.cache/huggingface .
# Transfer to robot (USB, SCP, etc.) and extract:
ssh robot@robot-ip
mkdir -p ~/.cache/huggingface
tar -xzf da3_models.tar.gz -C ~/.cache/huggingface/
Verify models are available:
ls ~/.cache/huggingface/hub/models--depth-anything--*
Error: RuntimeError: CUDA out of memory
Solutions:
- Use a smaller model (DA3-Small or DA3-Base)
- Reduce input resolution
- Close other GPU applications
- Switch to CPU mode temporarily
# Use smaller model
ros2 launch depth_anything_3_ros2 depth_anything_3.launch.py \
model_name:=depth-anything/DA3-SMALL
# Or use CPU
ros2 launch depth_anything_3_ros2 depth_anything_3.launch.py \
device:=cpu
Error: Failed to load model from Hugging Face Hub
Solutions:
- Check internet connection
- Verify Hugging Face Hub is accessible
- Download model manually and use local path
# Download manually
python3 -c "from huggingface_hub import snapshot_download; snapshot_download('depth-anything/DA3-BASE')"
# Use local path
ros2 launch depth_anything_3_ros2 depth_anything_3.launch.py \
model_name:=/path/to/local/model
Error: CV Bridge conversion failed
Solutions:
- Check camera's output encoding
- Adjust the `input_encoding` parameter
# For RGB cameras
--param input_encoding:=rgb8
# For BGR cameras (most common)
--param input_encoding:=bgr8
Solutions:
- Verify the camera is publishing: `ros2 topic echo /camera/image_raw`
- Check topic remapping is correct
- Verify QoS settings match camera
# List available topics
ros2 topic list | grep image
# Check topic info
ros2 topic info /camera/image_raw
Solutions:
- Check GPU utilization: `nvidia-smi`
- Enable performance logging
- Reduce image resolution
- Use smaller model
# Enable performance logging
--param log_inference_time:=true
# Run all tests
cd ~/ros2_ws
colcon test --packages-select depth_anything_3_ros2
# View test results
colcon test-result --verbose
# Run specific test
python3 -m pytest src/depth_anything_3_ros2/test/test_inference.py -v
This package follows:
- PEP 8 for Python code
- Google-style docstrings
- Type hints for all functions
- No emojis in code or documentation
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Follow code style guidelines
- Add tests for new functionality
- Submit a pull request
If you use Depth Anything 3 in your research, please cite the original paper:
@article{depthanything3,
title={Depth Anything 3: A New Foundation for Metric and Relative Depth Estimation},
author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Zhao, Zhen and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
journal={arXiv preprint arXiv:2511.10647},
year={2025}
}
This ROS2 wrapper is released under the MIT License.
The Depth Anything 3 model has its own license. Please refer to the official repository for model license information.
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- ROS2 Documentation: ROS2 Humble Docs
- Depth Anything 3: Official Repository
Note: This is an unofficial ROS2 wrapper. For the official Depth Anything 3 implementation, please visit the ByteDance-Seed repository.