ReachOne started as a vision—a response to watching communities struggle in the critical hours after disasters strike. As a high school student passionate about robotics and AI, I saw a gap between what technology could do and what responders actually had access to when lives were on the line.
My inspiration comes from understanding that every minute matters in disaster response. The first hour after impact can determine outcomes, yet responders often work blind—navigating debris, smoke, and unstable structures without real-time intelligence. ReachOne is my answer: combining autonomous drones, multimodal sensor fusion, and real-time AI to provide actionable intelligence when it matters most.
Right now, I'm building ReachOne from prototype to startup. I'm running field-test iterations, refining the sensor fusion algorithms, and developing ReachLink, the companion app that connects communities with responders. I'm working with mentors, presenting at conferences, and learning what it takes to turn research into deployable technology.
Where I hope this goes: I envision ReachOne becoming a trusted platform for disaster response organizations worldwide. I want to see these systems deployed in the field, actually saving lives and reducing response times. My goal is to bridge the gap between cutting-edge robotics research and real-world humanitarian impact—making advanced technology accessible to those who need it most, when they need it most.
Disaster-response robotics + AI for faster, safer post-disaster search and rescue
ReachOne is an integrated system combining autonomous UAV platforms, multimodal sensor fusion (LiDAR + thermal imaging), and real-time machine learning to provide actionable intelligence in the critical first hours after natural disasters. By detecting survivors, mapping hazards, and coordinating response resources, ReachOne aims to reduce response time and improve outcomes for communities facing catastrophic events.
ReachOne aims to close the first-hour gap between impact and actionable rescue intelligence.
Autonomous drone for mapping & coverage
3D mapping & structure detection
Jetson-class real-time inference
Heat signature detection
LiDAR + Thermal → Detection & Mapping
Community alerts & coordination
LiDAR provides precise 3D structural mapping and can penetrate smoke and dust, while thermal imaging detects human heat signatures even when visual identification is impossible. Fusing the two modalities plays to each sensor's strengths: LiDAR identifies potential void spaces and structural hazards, while thermal confirms human presence. The combined representation improves detection robustness: where one sensor degrades (e.g., thermal in high ambient heat), the other compensates (LiDAR structural analysis).
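One way to picture the compensation idea is a confidence-weighted blend of the two channels, where the thermal channel is down-weighted as ambient temperature approaches body temperature. This is a minimal sketch, not the deployed fusion model; the function name, the 15 °C falloff window, and the weight bounds are illustrative assumptions:

```python
import numpy as np

def fuse_detections(lidar_void_score, thermal_score,
                    ambient_temp_c, hot_ambient_c=35.0):
    """Blend per-cell LiDAR void-space scores with thermal scores.

    As ambient temperature nears body temperature, thermal contrast
    degrades, so weight shifts toward the LiDAR channel. The weights
    are clipped so neither modality is ever fully ignored.
    """
    lidar = np.asarray(lidar_void_score, dtype=float)
    thermal = np.asarray(thermal_score, dtype=float)
    # Down-weight thermal as ambient heat approaches hot_ambient_c.
    thermal_weight = np.clip((hot_ambient_c - ambient_temp_c) / 15.0, 0.1, 0.9)
    lidar_weight = 1.0 - thermal_weight
    return lidar_weight * lidar + thermal_weight * thermal
```

On a cool day the thermal score dominates; in desert heat the same detection leans on the LiDAR structural evidence instead.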
The detection pipeline uses CNN-based models trained on multimodal datasets. Fusion tensors combine LiDAR point clouds with thermal image patches, enabling the model to learn spatial-thermal correlations. The system outputs bounding boxes for survivors, hazard classifications (unstable structures, fire zones, blocked paths), and confidence scores. Training emphasizes robustness across lighting conditions, debris types, and environmental variations encountered in real disaster scenarios.
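The fusion-tensor construction above can be sketched as rasterizing the LiDAR point cloud into a bird's-eye height map and stacking it with a resampled thermal patch, giving the CNN a spatially aligned multi-channel input. The grid size, extent, and nearest-neighbour resampling here are illustrative assumptions, not the project's actual encoding:

```python
import numpy as np

def build_fusion_tensor(points, thermal, grid=(64, 64), extent=10.0):
    """Stack a LiDAR max-height map with a thermal patch.

    points: (N, 3) array of x, y, z in metres, x/y within +-extent.
    thermal: 2D thermal image patch covering the same ground area.
    Returns a (2, H, W) tensor: channel 0 = height, channel 1 = thermal.
    """
    h, w = grid
    height_map = np.zeros(grid, dtype=np.float32)
    # Map x, y in [-extent, extent] metres onto grid cells.
    xs = np.clip(((points[:, 0] + extent) / (2 * extent) * w).astype(int), 0, w - 1)
    ys = np.clip(((points[:, 1] + extent) / (2 * extent) * h).astype(int), 0, h - 1)
    # Keep the tallest return per cell (unbuffered in-place max).
    np.maximum.at(height_map, (ys, xs), points[:, 2])
    # Nearest-neighbour resample of the thermal patch onto the same grid.
    ti = np.linspace(0, thermal.shape[0] - 1, h).astype(int)
    tj = np.linspace(0, thermal.shape[1] - 1, w).astype(int)
    thermal_map = thermal[np.ix_(ti, tj)].astype(np.float32)
    return np.stack([height_map, thermal_map])
```

Aligning both modalities on one grid is what lets a convolutional model learn the spatial-thermal correlations mentioned above.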
In GPS-denied and rapidly changing environments, the system builds local maps using SLAM (Simultaneous Localization and Mapping) techniques. LiDAR scans are registered and fused with IMU/GPS data (when available) to create real-time 3D maps. The mapping pipeline tracks changes over time—identifying new collapses, cleared paths, or shifting hazards—enabling dynamic route planning for responders. This is critical because disaster sites evolve continuously; a map from 30 minutes ago may be obsolete.
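The change-tracking step can be illustrated by diffing two occupancy grids from successive mapping passes, flagging newly blocked cells (a fresh collapse) and newly cleared cells for route replanning. This is a minimal sketch assuming a probabilistic occupancy encoding in [0, 1]; the threshold is an illustrative choice:

```python
import numpy as np

def diff_occupancy(prev_grid, curr_grid, threshold=0.5):
    """Flag changes between two occupancy grids (values in [0, 1]).

    Returns boolean masks of newly blocked cells (e.g. a new collapse)
    and newly cleared cells (e.g. a path opened by debris removal),
    which feed dynamic route replanning for responders.
    """
    prev_occ = np.asarray(prev_grid) > threshold
    curr_occ = np.asarray(curr_grid) > threshold
    newly_blocked = curr_occ & ~prev_occ
    newly_cleared = prev_occ & ~curr_occ
    return newly_blocked, newly_cleared
```

Re-running this diff on every mapping pass is what keeps a 30-minute-old route from being trusted after the site has shifted.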
Design goal: near-real-time inference (< 2 seconds from sensor capture to actionable output) with high reliability. "Almost working isn't enough" in disaster response—false positives waste resources, false negatives cost lives. The system runs inference on edge compute (Jetson-class hardware) to avoid network dependency. Redundancy checks, confidence thresholds, and fallback modes ensure the system degrades gracefully if sensors fail or conditions exceed design parameters.
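The graceful-degradation policy above can be sketched as a simple decision rule: never auto-dispatch on a single modality, route marginal hits to a human, and discard low-confidence noise. The function name, action labels, and thresholds are illustrative assumptions, not the system's actual values:

```python
def triage_detection(confidence, thermal_ok=True, lidar_ok=True,
                     confirm_threshold=0.85, review_threshold=0.5):
    """Map a fused detection confidence plus sensor health to an action.

    With a sensor offline the system degrades rather than fails: it
    stops auto-confirming and sends detections to an operator instead.
    """
    if not (thermal_ok or lidar_ok):
        return "sensor-fault"      # no usable data; report the fault
    if confidence >= confirm_threshold and thermal_ok and lidar_ok:
        return "dispatch"          # high confidence, both modalities agree
    if confidence >= review_threshold:
        return "human-review"      # plausible but unconfirmed; flag it
    return "discard"               # below review threshold; likely noise
```

The asymmetry is deliberate: a false "dispatch" wastes responder time, so confirmation demands both sensors, while a borderline score is cheap to escalate to a human.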
Development follows test-fail-learn cycles. Each iteration addresses specific failure modes identified in controlled and field testing:
Field validation of the ReachOne system involved deploying UAV platforms equipped with LiDAR sensors in controlled disaster simulation environments. Testing focused on validating detection algorithms in realistic conditions: low visibility scenarios, cluttered debris fields, and varying lighting conditions. The drone platform enabled rapid coverage of large areas while collecting high-resolution 3D point cloud data for offline and real-time processing.
ReachLink is a humanitarian coordination app that complements the UAV sensor platform by enabling community-driven intelligence and resource coordination.
AI enables intelligent prioritization of alerts based on severity and proximity, optimal routing of responders considering real-time hazard maps, de-duplication of reports to reduce information overload, and triage assistance for medical resources. The system learns from patterns across multiple disaster events to improve coordination effectiveness over time.
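The prioritization and de-duplication steps can be sketched as a two-stage pass: merge reports within a small radius (keeping the most severe), then rank by severity and responder distance. The alert schema, the 30 m radius, and the equirectangular distance approximation are all illustrative assumptions:

```python
import math

def prioritize_alerts(alerts, dedup_radius_m=30.0):
    """Rank community alerts, merging near-duplicate reports.

    Each alert is a dict with illustrative fields:
    {"lat": .., "lon": .., "severity": 1-5, "dist_m": responder distance}
    """
    def close(a, b):
        # Equirectangular approximation; adequate at de-dup scales.
        dx = (a["lon"] - b["lon"]) * 111_320 * math.cos(math.radians(a["lat"]))
        dy = (a["lat"] - b["lat"]) * 111_320
        return math.hypot(dx, dy) <= dedup_radius_m

    merged = []
    # Visit highest-severity first so duplicates keep the worst report.
    for alert in sorted(alerts, key=lambda a: -a["severity"]):
        if not any(close(alert, kept) for kept in merged):
            merged.append(alert)
    # Higher severity first; nearer responders break ties.
    return sorted(merged, key=lambda a: (-a["severity"], a["dist_m"]))
```

Collapsing ten phone reports of the same collapsed building into one ranked entry is exactly the information-overload reduction described above.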
Poster presentation
Presentation
Prototype testing
ReachOne is where my interests in robotics, AI, and humanitarian systems come together.