In a recent paper we presented a method for image-based navigation by which a robot can navigate to desired positions and orientations in 3-D space specified by single images taken from those positions. In this paper we further investigate the method and develop robust algorithms for navigation under the perspective projection model. In particular, we develop a tracking algorithm that exploits our knowledge of the motion performed by the robot at every step. This algorithm allows us to maintain correspondences between frames and to eliminate false correspondences. We combine this tracking algorithm with an iterative optimization procedure to accurately recover the displacement of the robot from the target. Our method for navigation is attractive since it does not require a 3-D model of the environment. We demonstrate the robustness of our method by applying it to a six-degree-of-freedom robot arm.
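The idea behind such motion-aware tracking can be illustrated with a minimal sketch: since the robot's motion at each step is known, the expected image position of a tracked feature under perspective projection can be predicted, and candidate matches falling outside a small gate around the prediction can be rejected as false correspondences. The function names, the simplified motion model (rotation about the optical axis plus translation), and the gate threshold below are illustrative assumptions, not the algorithm from the paper.

```python
import math

def project(point, f=1.0):
    # Perspective projection of a 3-D point (X, Y, Z) onto the image plane.
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

def predict_feature(point, rotation_z, translation):
    # Apply a known robot motion to a 3-D point, then project it.
    # The motion model (in-plane rotation + translation) is a
    # simplification chosen for illustration only.
    X, Y, Z = point
    c, s = math.cos(rotation_z), math.sin(rotation_z)
    Xr, Yr = c * X - s * Y, s * X + c * Y
    tx, ty, tz = translation
    return project((Xr + tx, Yr + ty, Z + tz))

def gated_match(predicted, candidates, gate=0.05):
    # Accept the nearest candidate only if it lies within the gate
    # radius of the predicted position; otherwise reject the
    # correspondence as false and return None.
    best, best_dist = None, gate
    for i, (u, v) in enumerate(candidates):
        d = math.hypot(u - predicted[0], v - predicted[1])
        if d < best_dist:
            best, best_dist = i, d
    return best
```

For example, a feature at (1, 0, 5) observed after the robot translates by (0, 0, -1) projects to (0.25, 0); a candidate detected there is accepted, while a detection far from the prediction is discarded.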