ReachOne - Revolutionizing Disaster Response with AI-Powered Robotics

Hi, I'm Veronika Wang

ReachOne started as a vision—a response to watching communities struggle in the critical hours after disasters strike. As a high school student passionate about robotics and AI, I saw a gap between what technology could do and what responders actually had access to when lives were on the line.

My inspiration comes from understanding that every minute matters in disaster response. The first hour after impact can determine outcomes, yet responders often work blind—navigating debris, smoke, and unstable structures without real-time intelligence. ReachOne is my answer: combining autonomous drones, multimodal sensor fusion, and real-time AI to provide actionable intelligence when it matters most.

Right now, I'm building ReachOne from prototype to startup. I'm iterating on field testing, refining the sensor fusion algorithms, and developing ReachLink—the companion app that connects communities with responders. I'm working with mentors, presenting at conferences, and learning what it takes to turn research into deployable technology.

Where I hope this goes: I envision ReachOne becoming a trusted platform for disaster response organizations worldwide. I want to see these systems deployed in the field, actually saving lives and reducing response times. My goal is to bridge the gap between cutting-edge robotics research and real-world humanitarian impact—making advanced technology accessible to those who need it most, when they need it most.

ReachOne

Disaster-response robotics + AI for faster, safer post-disaster search and rescue

ReachOne is an integrated system combining autonomous UAV platforms, multimodal sensor fusion (LiDAR + thermal imaging), and real-time machine learning to provide actionable intelligence in the critical first hours after natural disasters. By detecting survivors, mapping hazards, and coordinating response resources, ReachOne aims to reduce response time and improve outcomes for communities facing catastrophic events.

Role: Founder / Builder
Focus: Robotics, AI/ML, Sensor Fusion, UAV Testing
Domain: Disaster Relief / Search & Rescue
Stage: Prototype + Research + Field-oriented testing
ReachOne prototype

Problem → Why It Matters

  • Limited visibility: Debris, smoke, and structural collapse create zero-visibility conditions that slow manual search operations
  • Blocked and unstable access: Responders face significant safety risks navigating collapsed structures and unstable terrain
  • GPS-denied and rapidly changing environments: Traditional mapping systems fail when infrastructure is destroyed and conditions change minute-by-minute
  • Fragmented communication and resource coordination: Multiple agencies and volunteers struggle to share real-time intelligence and coordinate effectively

ReachOne aims to close the first-hour gap between impact and actionable rescue intelligence.

The System

  • UAV Platform: autonomous drone for mapping & coverage
  • LiDAR: 3D mapping & structure detection
  • Thermal Imaging: heat signature detection
  • Edge Compute: Jetson-class real-time inference
  • ML Fusion Module: LiDAR + thermal → detection & mapping
  • Outputs: hazard map, survivor detection, responder routing
  • ReachLink App: community alerts & coordination

Technical Approach

Sensor Fusion: LiDAR + Thermal

LiDAR provides precise 3D structural mapping independent of lighting, while thermal imaging detects human heat signatures even through smoke and dust, when visual identification is impossible. Fusion combines these modalities: LiDAR identifies potential void spaces and structural hazards, while thermal confirms human presence. The combined fusion tensor improves detection robustness: where one sensor degrades (e.g., thermal in high ambient heat, LiDAR in dense airborne dust), the other compensates.
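As a minimal sketch of how such a fusion tensor might be assembled: a bird's-eye LiDAR height map and a resampled thermal frame stacked into one two-channel array. The grid size, mapped extent, and `fuse_lidar_thermal` helper are illustrative assumptions for this example, not the actual ReachOne pipeline.

```python
import numpy as np

def fuse_lidar_thermal(points_xyz, thermal_img, grid_hw=(64, 64), extent=10.0):
    """Build a 2-channel fusion tensor: a bird's-eye LiDAR height map
    stacked with a thermal frame resampled onto the same grid.

    points_xyz : (N, 3) LiDAR points in metres, sensor-centred
    thermal_img: (H, W) thermal frame, arbitrary resolution
    extent     : half-width of the mapped ground area in metres
    """
    h, w = grid_hw

    # Channel 0: max height per ground cell (simple occupancy-style raster).
    height_ch = np.zeros((h, w), dtype=np.float32)
    xs, ys, zs = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    keep = (np.abs(xs) < extent) & (np.abs(ys) < extent)
    rows = ((ys[keep] + extent) / (2 * extent) * (h - 1)).astype(int)
    cols = ((xs[keep] + extent) / (2 * extent) * (w - 1)).astype(int)
    np.maximum.at(height_ch, (rows, cols), zs[keep])

    # Channel 1: thermal frame resampled to the same grid (nearest neighbour).
    th, tw = thermal_img.shape
    r_idx = np.linspace(0, th - 1, h).astype(int)
    c_idx = np.linspace(0, tw - 1, w).astype(int)
    thermal_ch = thermal_img[np.ix_(r_idx, c_idx)].astype(np.float32)

    return np.stack([height_ch, thermal_ch])  # shape (2, h, w)
```

Keeping both modalities on one spatial grid is what lets a downstream model learn spatial-thermal correlations cell by cell.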

ML/AI: Survivor & Hazard Detection

The detection pipeline uses CNN-based models trained on multimodal datasets. Fusion tensors combine LiDAR point clouds with thermal image patches, enabling the model to learn spatial-thermal correlations. The system outputs bounding boxes for survivors, hazard classifications (unstable structures, fire zones, blocked paths), and confidence scores. Training emphasizes robustness across lighting conditions, debris types, and environmental variations encountered in real disaster scenarios.
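To illustrate the output stage described above, here is a hedged sketch of turning hypothetical raw detector output into labelled, confidence-scored results. The array layout, class ids, and `postprocess` name are assumptions for this example only.

```python
import numpy as np

# Hypothetical class ids mirroring the outputs described above.
CLASSES = {0: "survivor", 1: "unstable_structure", 2: "fire_zone", 3: "blocked_path"}

def postprocess(raw_dets, conf_thresh=0.5):
    """Filter raw detector output into labelled results, highest confidence first.

    raw_dets: (N, 6) array of [x1, y1, x2, y2, confidence, class_id]
    """
    keep = raw_dets[raw_dets[:, 4] >= conf_thresh]
    keep = keep[np.argsort(-keep[:, 4])]  # sort by descending confidence
    return [
        {"box": row[:4].tolist(),
         "confidence": float(row[4]),
         "label": CLASSES[int(row[5])]}
        for row in keep
    ]
```

The confidence threshold is the lever discussed under "Latency & Reliability": raising it trades false positives against false negatives.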

Mapping / SLAM

In GPS-denied and rapidly changing environments, the system builds local maps using SLAM (Simultaneous Localization and Mapping) techniques. LiDAR scans are registered and fused with IMU/GPS data (when available) to create real-time 3D maps. The mapping pipeline tracks changes over time—identifying new collapses, cleared paths, or shifting hazards—enabling dynamic route planning for responders. This is critical because disaster sites evolve continuously; a map from 30 minutes ago may be obsolete.
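The change-tracking idea can be sketched as a comparison of two occupancy grids from successive mapping passes. This is an illustrative simplification, not the actual SLAM pipeline; the grid representation and threshold are assumptions.

```python
import numpy as np

def map_changes(prev_occ, curr_occ, thresh=0.5):
    """Compare two occupancy grids (values in [0, 1], 1 = occupied) and
    flag cells whose state flipped between mapping passes."""
    prev_b = prev_occ >= thresh
    curr_b = curr_occ >= thresh
    new_obstacles = curr_b & ~prev_b   # e.g. a fresh collapse
    cleared_paths = prev_b & ~curr_b   # e.g. debris removed
    return new_obstacles, cleared_paths
```

Feeding the changed cells into the route planner is what keeps responder paths current as the site evolves.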

Latency & Reliability

Design goal: near-real-time inference (< 2 seconds from sensor capture to actionable output) with high reliability. "Almost working isn't enough" in disaster response—false positives waste resources, false negatives cost lives. The system runs inference on edge compute (Jetson-class hardware) to avoid network dependency. Redundancy checks, confidence thresholds, and fallback modes ensure the system degrades gracefully if sensors fail or conditions exceed design parameters.
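The fallback-mode idea might look like the following sketch, with hypothetical mode names and a simplified two-sensor health model; the real system's degradation logic is presumably richer than this.

```python
from dataclasses import dataclass

@dataclass
class SensorStatus:
    lidar_ok: bool
    thermal_ok: bool

def select_mode(status: SensorStatus) -> str:
    """Pick an operating mode so the system degrades gracefully."""
    if status.lidar_ok and status.thermal_ok:
        return "fused_detection"   # full LiDAR + thermal pipeline
    if status.thermal_ok:
        return "thermal_only"      # heat signatures, no 3D structure
    if status.lidar_ok:
        return "lidar_only"        # structure/void mapping only
    return "return_to_base"        # no usable sensing: fail safe
```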

Testing & Iteration

Development follows test-fail-learn cycles. Each iteration addresses specific failure modes identified in controlled and field testing:

  • Detection accuracy: Improved from ~65% to ~88% through dataset augmentation and fusion architecture refinement
  • Latency: Reduced inference time from 5+ seconds to < 2 seconds via model optimization and edge compute tuning
  • Robustness: Enhanced performance in low-visibility conditions through thermal-LiDAR fusion and adaptive thresholds
  • Mapping stability: Reduced SLAM drift through improved sensor calibration and loop closure detection
  • Field validation: Tested in simulated debris fields and controlled smoke environments to validate real-world performance

Drone + LiDAR Testing

ReachOne drone and LiDAR testing

Field validation of the ReachOne system involved deploying UAV platforms equipped with LiDAR sensors in controlled disaster simulation environments. Testing focused on validating detection algorithms in realistic conditions: low visibility scenarios, cluttered debris fields, and varying lighting conditions. The drone platform enabled rapid coverage of large areas while collecting high-resolution 3D point cloud data for offline and real-time processing.

Validation Notes

  • What was tested: Detection accuracy in low visibility conditions, mapping precision in cluttered environments, real-time inference latency, and system reliability under varying environmental conditions
  • What improved over iterations: Detection accuracy increased from 65% to 88%, inference latency reduced from 5+ seconds to < 2 seconds, robustness in smoke/dust conditions improved through fusion architecture, and mapping stability enhanced via improved sensor calibration
  • What remains next: Scaling to larger datasets, expanded real-world field trials in collaboration with disaster response organizations, integration testing with ReachLink app, and optimization for additional sensor modalities

ReachLink

ReachLink is a humanitarian coordination app that complements the UAV sensor platform by enabling community-driven intelligence and resource coordination.

Why AI Matters Here

AI enables intelligent prioritization of alerts based on severity and proximity, optimal routing of responders considering real-time hazard maps, de-duplication of reports to reduce information overload, and triage assistance for medical resources. The system learns from patterns across multiple disaster events to improve coordination effectiveness over time.
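A toy sketch of the de-duplication and prioritization logic, under the assumption that reports carry coordinates and a 1-to-5 severity score; the field names and the 50 m merge radius are illustrative, not ReachLink's actual schema.

```python
import math

def _dist_m(a, b):
    # Equirectangular approximation, adequate at city scale.
    dx = math.radians(b["lon"] - a["lon"]) * math.cos(math.radians(a["lat"]))
    dy = math.radians(b["lat"] - a["lat"])
    return 6_371_000 * math.hypot(dx, dy)

def dedupe_and_rank(reports, merge_radius_m=50.0):
    """Merge near-duplicate reports, then rank by severity;
    the number of corroborating reports breaks ties.

    reports: list of dicts with 'lat', 'lon', 'severity' (1-5)
    """
    merged = []
    for r in reports:
        for m in merged:
            if _dist_m(r, m) < merge_radius_m:
                m["severity"] = max(m["severity"], r["severity"])
                m["count"] += 1
                break
        else:
            merged.append({**r, "count": 1})
    return sorted(merged, key=lambda m: (-m["severity"], -m["count"]))
```

Merging nearby reports before ranking is what keeps responders from chasing the same incident reported ten times.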

ReachLink app interface

Poster / Publications

ReachOne research poster - disaster-response robotics and AI

Key Findings

  • Multimodal sensor fusion (LiDAR + thermal) improves detection robustness by 23% compared to single-modality approaches
  • Community alerts via ReachLink app augment sensor data, providing ground-level intelligence that complements aerial mapping
  • Near-real-time inference target (< 2 seconds from sensor capture to actionable output) achieved on edge compute hardware, enabling timely decision support
  • SLAM-based mapping enables dynamic route planning in GPS-denied environments, critical for post-disaster scenarios
  • Field validation demonstrates 88% detection accuracy in controlled disaster simulation environments


ReachOne is where my interests in robotics, AI, and humanitarian systems come together.