This paper presents novel methods for increasing the robustness of visual tracking systems by incorporating information from inertial sensors. We show that more can be achieved than simply combining the sensor data within a statistical filter. In particular, we show that, in addition to providing motion predictions for the visual sensor, inertial data can also be used to estimate the motion blur affecting each feature, and this estimate can then be used to dynamically tune the parameters of each feature detector in the visual sensor. This allows the system to obtain useful information from the visual sensor even in the presence of substantial motion blur. Finally, the visual sensor can be used to calibrate the parameters of the inertial sensor, eliminating drift.
Georg S. W. Klein, Tom Drummond
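The blur-prediction idea described in the abstract can be sketched as follows. This is a minimal illustrative model, not the paper's actual formulation: it assumes a feature near the image centre, pure camera rotation measured by rate gyroscopes, a known focal length and exposure time, and a hypothetical threshold-relaxation rule for the feature detector.

```python
import numpy as np

def blur_length_px(omega, focal_px, exposure_s):
    """Approximate motion-blur length in pixels for a feature near the
    image centre, caused by camera rotation omega (rad/s, body frame)
    as measured by rate gyroscopes.  Small-angle sketch only."""
    # Under pure rotation, the image-plane speed of a central feature is
    # roughly the focal length times the angular rate about the two axes
    # perpendicular to the optical axis.
    speed_px_per_s = focal_px * np.linalg.norm(omega[:2])
    # Blur length is image-plane speed integrated over the exposure.
    return speed_px_per_s * exposure_s

def tuned_edge_threshold(base_threshold, blur_px):
    """Relax an edge detector's gradient threshold as predicted blur
    grows, since blur spreads an edge's gradient over ~blur_px pixels.
    Hypothetical tuning rule for illustration, not the paper's."""
    return base_threshold / max(1.0, blur_px)

# Example: 0.5 rad/s roll rate, 600 px focal length, 20 ms exposure
# gives a predicted blur of 6 px, so the threshold is relaxed 6x.
blur = blur_length_px(np.array([0.5, 0.0, 0.0]), 600.0, 0.02)
threshold = tuned_edge_threshold(30.0, blur)
```

In a full system, the per-feature blur prediction would depend on each feature's image position and depth, and on camera translation as well as rotation; the central-feature rotation-only case above is the simplest instance of the idea.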