Abstract— This article presents an efficient and mature vision-based navigation algorithm based on sensory-motor learning. Neither a Cartesian nor a topological map is required; instead, the system relies on a set of biologically inspired place cells. Each place cell defines a location by a spatial constellation of landmarks learned online, and its activity provides an internal measure of localization. A simple set of place-action associations enables a robot to return to a learned location or to follow an arbitrary visual path. The system achieves sensory-motor tasks in indoor as well as in large outdoor environments with a similar computational load. The behavior is robust to kidnapping, to the addition or removal of objects and landmarks, to mobile obstacles, and to severe occlusions of the visual field.
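
The mechanism summarized above can be sketched minimally: each place cell stores a constellation of (landmark, azimuth) pairs, its activity is the fraction of landmarks re-recognized near their expected bearing, and a winner-take-all over activities triggers the associated action. All names, the angular tolerance, and the matching rule here are illustrative assumptions, not the paper's actual implementation.

```python
import math

class PlaceCell:
    """Hypothetical place cell: a constellation of landmark azimuths
    learned online, plus the motor action associated at learning time."""

    def __init__(self, constellation, action):
        self.constellation = constellation  # {landmark_id: azimuth in radians}
        self.action = action                # associated motor command

    def activity(self, observed, tol=math.radians(10)):
        # Activity = fraction of stored landmarks re-recognized near their
        # expected azimuth in the current view (robust to missing landmarks).
        if not self.constellation:
            return 0.0
        hits = sum(
            1 for lm, az in self.constellation.items()
            if lm in observed and abs(observed[lm] - az) <= tol
        )
        return hits / len(self.constellation)

def select_action(cells, observed):
    # Winner-take-all: the most active place cell drives the behavior.
    best = max(cells, key=lambda c: c.activity(observed))
    return best.action

# Toy usage: two learned places; the current view matches the first one,
# even though an extra, unlearned landmark ("lamp") is also visible.
cells = [
    PlaceCell({"tree": 0.10, "door": 1.20, "post": -0.80}, action="turn_left"),
    PlaceCell({"bench": 0.50, "sign": 2.00}, action="go_straight"),
]
view = {"tree": 0.12, "door": 1.25, "post": -0.75, "lamp": 2.00}
print(select_action(cells, view))  # -> turn_left
```

Because activity is a partial-match score rather than an exact template, adding or removing individual landmarks only degrades the score gracefully, which is one way to read the robustness claims in the abstract.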