Wearable Navigation Aid for the Visually Impaired
Overview
This document outlines a proposed wearable solution for guiding blind or visually impaired individuals through
complex urban environments. It integrates real-time obstacle and hazard detection with spatial haptic
feedback, all in a compact AR-style visor or smart glasses form factor.
Core Concept
- Camera-equipped smart visor or glasses
- AI-based object and hazard detection
- 180-degree spatial vibration feedback
- Hands-free, intuitive guidance system
Design Considerations
While traditional smart glasses offer convenience, their size constrains camera field of view, motor placement, and battery capacity. An AR-style visor provides space for wider-FOV cameras, distributed vibration motors, and a larger battery while maintaining comfort and usability.
Key Hardware Components
- Wide-angle camera (120-180 degree FOV)
- Vibration motors (ERM or LRA; see the driver sketch after this list)
- Compute module (ESP32 microcontroller, or a single-board computer such as the Raspberry Pi Zero 2 W or Jetson Nano)
- Battery (Li-Po, 1000 mAh or higher)
- Bone conduction audio (optional)
- Bluetooth/Wi-Fi module
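The haptic side can be prototyped with a handful of PWM channels. Below is a minimal sketch assuming a Raspberry Pi, the gpiozero library, and three ERM motors behind simple transistor drivers; the BCM pin numbers (17, 27, 22) are placeholder choices, not a tested layout.

```python
# Minimal sketch: driving three ERM vibration motors from a Raspberry Pi
# via PWM. Pin numbers and the per-motor driver circuit are assumptions.
from time import sleep

from gpiozero import PWMOutputDevice

# One PWM channel per haptic zone: left, middle, right.
motors = {
    "left": PWMOutputDevice(17),
    "middle": PWMOutputDevice(27),
    "right": PWMOutputDevice(22),
}

def pulse(zone: str, intensity: float, duration: float = 0.2) -> None:
    """Vibrate one zone at a given intensity (0.0-1.0) for `duration` seconds."""
    motor = motors[zone]
    motor.value = max(0.0, min(1.0, intensity))
    sleep(duration)
    motor.value = 0.0

# Example: a strong pulse on the left to flag a nearby obstacle.
pulse("left", 0.8)
```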
Software and Features
- Real-time object detection (e.g., YOLOv8, MobileNet SSD)
- Hazard recognition (vehicles, stairs, obstacles)
- Depth estimation using AI models like MiDaS
- Spatial vibration mapping (left/middle/right; see the pipeline sketch after this list)
- Optional voice feedback and navigation integration
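To make the mapping concrete, here is a minimal sketch of the detection-to-haptics step, assuming the ultralytics YOLOv8 package and OpenCV; the hazard class set and the even three-way split of the frame are illustrative choices, not a final design.

```python
# Minimal sketch: map YOLOv8 detections to left/middle/right haptic zones.
# HAZARDS is an assumed subset of COCO classes, not a finalized list.
import cv2
from ultralytics import YOLO

HAZARDS = {"person", "car", "bicycle", "bus", "truck"}
model = YOLO("yolov8n.pt")  # nano model, sized for embedded-class hardware

def detections_to_zones(frame):
    """Return the set of haptic zones (left/middle/right) containing a hazard."""
    zones = set()
    width = frame.shape[1]
    result = model(frame, verbose=False)[0]
    for box in result.boxes:
        label = model.names[int(box.cls)]
        if label not in HAZARDS:
            continue
        # Place the detection by the horizontal center of its bounding box.
        x1, _, x2, _ = box.xyxy[0].tolist()
        center = (x1 + x2) / 2
        if center < width / 3:
            zones.add("left")
        elif center < 2 * width / 3:
            zones.add("middle")
        else:
            zones.add("right")
    return zones

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(detections_to_zones(frame))  # e.g. {"left", "middle"}
cap.release()
```

Splitting the frame into thirds mirrors the three-motor layout of the Phase 1 prototype; a denser haptic array would simply use more bins.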
Prototype Development Plan
Phase 1: Headband prototype with a camera, three vibration motors, and basic obstacle detection
Phase 2: Add on-device AI models and monocular depth estimation (see the depth sketch after this plan)
Phase 3: Develop wearable visor with custom PCB and full haptic array
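For the Phase 2 depth step, a minimal sketch using the published MiDaS models from torch.hub is shown below. The MiDaS_small variant is assumed for embedded-class hardware, and the intensity heuristic in the closing comment is illustrative only.

```python
# Minimal sketch: monocular relative depth with MiDaS via torch.hub.
# Model choice (MiDaS_small) is an assumption for low-power hardware.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

def relative_depth(frame_bgr):
    """Return a per-pixel relative inverse-depth map (larger = closer)."""
    img = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    batch = transform(img)
    with torch.no_grad():
        prediction = midas(batch)
        # Resize the prediction back to the input resolution.
        prediction = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=img.shape[:2],
            mode="bicubic",
            align_corners=False,
        ).squeeze()
    return prediction.numpy()

# Illustrative use: scale vibration intensity by how close the scene's
# nearest region appears relative to the average, e.g.
#   depth = relative_depth(frame)
#   intensity = float(depth.max() / depth.mean())
```

Note that MiDaS produces relative, not metric, depth, so thresholds for "near" and "far" would need to be calibrated per device during prototyping.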
Final Thoughts
An AR-style visor is a more practical and scalable platform for navigation by blind and visually impaired users than traditional smart glasses. It enables real-time awareness through vibration without blocking hearing or requiring constant user input, paving the way for greater autonomy and safety.