Abstract— Mobile robots can be easily equipped with numerous sensors which can aid in the tasks of localization and ego-motion estimation. Two such examples are Inertial Measurement Units (IMUs), which provide a gravity vector via the pitch and roll angles, and wide-angle or panoramic imaging devices, which capture 360° field-of-view images. As the number of powerful devices on a single robot increases, an important problem arises: how to fuse the information coming from multiple sources to obtain an accurate and efficient motion estimate. The IMU provides real-time readings which can be employed in orientation estimation, while in principle an omnidirectional camera provides enough information to estimate the full rigid motion (up to translational scale). However, in addition to being computationally overwhelming, such an estimation is traditionally based on the sensitive search for feature correspondences between image frames. In this paper we present a novel algorithm ...