We present a vision-based method that assists human
navigation within unfamiliar environments. Our main contribution
is a novel algorithm that learns the correlation between
user egomotion and feature matches on a wearable
set of uncalibrated cameras. The primary advantage of this
method is that it provides robust guidance cues in the user’s
body frame and is tolerant of small changes in the camera
configuration. We couple this method with a topological
mapping algorithm that provides global localization within
the traversed environment. We validate our approach with
ground-truth experiments and demonstrate the method on
several real-world datasets spanning two kilometers of indoor
and outdoor walking excursions.