A novel group-theoretic method is proposed for autonomous navigation based on a spherical camera. The environment of the robot is captured on a sphere. The three-dimensional scenes observed at two different points in space are related by a transformation from the special Euclidean motion group SE(3), the semi-direct product of the rotation and translation groups. The motion of the robot is recovered by iteratively estimating the rotation and the translation in an Expectation-Maximization fashion.
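The alternating scheme described above can be illustrated with a minimal sketch. This is not the paper's actual bearing-based formulation: it assumes known 3D point correspondences (with the source points centered for a clean demonstration), estimates the rotation by the Kabsch (SVD) method with the translation held fixed, then updates the translation with the rotation held fixed, mirroring the alternation between the two factors of SE(3). All function names and parameters are illustrative.

```python
import numpy as np

def kabsch(P, B):
    """Best rotation R minimizing ||R P - B||_F (columns are 3D points)."""
    U, _, Vt = np.linalg.svd(P @ B.T)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def estimate_se3(P, Q, n_iter=10):
    """Alternately estimate (R, t) such that Q ≈ R P + t."""
    t = np.zeros((3, 1))
    for _ in range(n_iter):
        R = kabsch(P, Q - t)                     # rotation step, t fixed
        t = (Q - R @ P).mean(axis=1, keepdims=True)  # translation step, R fixed
    return R, t

# Synthetic rigid motion: rotation R_true (Rodrigues formula) and translation t_true.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 8))
P -= P.mean(axis=1, keepdims=True)               # center source points
axis = np.array([0.3, -0.6, 0.8]); axis /= np.linalg.norm(axis)
K = np.array([[0, -axis[2], axis[1]],
              [axis[2], 0, -axis[0]],
              [-axis[1], axis[0], 0]])
theta = 0.7
R_true = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
t_true = np.array([[0.5], [-1.0], [2.0]])
Q = R_true @ P + t_true

R_est, t_est = estimate_se3(P, Q)
```

On noiseless correspondences the alternation converges to the true motion; with noisy data each sweep still decreases the registration residual, which is the sense in which the rotation and translation estimates refine each other.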