In this paper we present a novel approach to indoor self-localization using reference omnidirectional images. The method requires only one omnidirectional image of the whole scene, stored in the robot's memory, and a conventional uncalibrated on-board camera. We match the omnidirectional image against the conventional images captured by the on-board camera and compute the hybrid epipolar geometry using lifted coordinates and robust estimation techniques. We then map the epipole in the reference omnidirectional image to the ground plane through a homography, also expressed in lifted coordinates, which yields the position of the robot on the ground plane together with its uncertainty. We perform experiments with simulated and real data to show the feasibility of this new self-localization approach.
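As a rough illustration of the final step described above, the sketch below maps an image point (e.g. the epipole found in the reference omnidirectional image) to the ground plane through a 3x6 homography acting on lifted (second-order) coordinates, and propagates its covariance to first order. This is not the authors' implementation; the homography `H_lift`, the epipole location, and its covariance are placeholder inputs chosen for illustration only.

```python
# Minimal sketch, assuming a 3x6 lifted homography and a 2x2 epipole covariance
# are already available; all numeric values below are placeholders.
import numpy as np

def lift(p):
    """Lift a 2D point (x, y) to the 6-vector of second-order monomials."""
    x, y = p
    return np.array([x * x, x * y, y * y, x, y, 1.0])

def lift_jacobian(p):
    """Jacobian of the lifting map with respect to (x, y); shape (6, 2)."""
    x, y = p
    return np.array([[2 * x, 0.0],
                     [y,     x],
                     [0.0,   2 * y],
                     [1.0,   0.0],
                     [0.0,   1.0],
                     [0.0,   0.0]])

def map_to_ground(H_lift, epipole, cov_epipole):
    """Map an image point to the ground plane via a lifted homography and
    propagate its covariance to the ground plane (first-order approximation)."""
    g = H_lift @ lift(epipole)            # homogeneous ground-plane point
    X, Y, W = g
    position = np.array([X / W, Y / W])   # robot position on the ground plane

    # Chain rule: d(position)/d(epipole) = d(dehomog.)/dg * H_lift * d(lift)/dp
    J_dehom = np.array([[1 / W, 0.0, -X / W**2],
                        [0.0, 1 / W, -Y / W**2]])
    J = J_dehom @ H_lift @ lift_jacobian(epipole)
    cov_position = J @ cov_epipole @ J.T
    return position, cov_position

if __name__ == "__main__":
    H_lift = np.random.randn(3, 6)        # placeholder lifted homography
    epipole = np.array([320.0, 240.0])    # placeholder epipole in the omni image
    cov_epipole = np.diag([4.0, 4.0])     # placeholder uncertainty (2 px std.)
    pos, cov = map_to_ground(H_lift, epipole, cov_epipole)
    print("ground-plane position:", pos)
    print("covariance:\n", cov)
```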