We introduce an egomotion estimator that exploits the complementary strengths of inertial and visual measurements for efficient and robust real-time motion estimation. Our work targets noisy, computationally limited, and unconstrained environments. Experimental results show that the proposed technique, although built upon heuristics that often favor speed over exhaustiveness, produces accurate and robust egomotion estimates in situations that cripple visual-only and inertial-only trackers.
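The abstract does not specify the fusion scheme, so the sketch below is an illustration only: a classic complementary filter on one orientation axis, in which a gyro integral supplies the smooth high-frequency component (it drifts) and a visual estimate anchors the low frequency (it is drift-free but noisy). The function name, the blending weight alpha, and the toy noise values are all hypothetical, not taken from the paper.

import numpy as np

def complementary_fuse(yaw_prev, gyro_rate, yaw_visual, dt, alpha=0.98):
    # Dead-reckon orientation from the gyro rate (rad/s), then blend
    # with the visual yaw estimate (rad). alpha is a hypothetical
    # weight: close to 1 trusts the gyro at high frequency while the
    # visual term slowly corrects its drift.
    yaw_gyro = yaw_prev + gyro_rate * dt
    return alpha * yaw_gyro + (1.0 - alpha) * yaw_visual

# Toy run: constant 0.1 rad/s rotation, a biased gyro, and a noisy
# visual tracker. The fused estimate drifts far less than the gyro
# alone while staying smoother than the raw visual readings.
rng = np.random.default_rng(0)
dt, true_rate, gyro_bias = 0.01, 0.1, 0.02
yaw_true = yaw_fused = 0.0
for _ in range(1000):
    yaw_true += true_rate * dt
    gyro_rate = true_rate + gyro_bias + rng.normal(0, 0.01)
    yaw_visual = yaw_true + rng.normal(0, 0.05)
    yaw_fused = complementary_fuse(yaw_fused, gyro_rate, yaw_visual, dt)

print(f"true yaw: {yaw_true:.3f} rad, fused yaw: {yaw_fused:.3f} rad")

In this toy setting the gyro alone would accumulate about 0.2 rad of bias-driven drift over the run, while the fused estimate stays within roughly 0.01 rad of the truth, which is the complementary behavior the abstract alludes to.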