Abstract— This paper presents an algorithm that effectively constrains inertial navigation drift using monocular camera data. It operates in unknown, large-scale environments and assumes no prior knowledge of the size, appearance, or location of potential environmental features. Low-cost inertial navigation units are found on most autonomous vehicles and many smaller robots; depending on the grade of the sensor, inertial data used alone for control and navigation remains reliable only for seconds or minutes. The algorithm simultaneously estimates relative feature locations in sensor space and inertial position, velocity, and attitude in world coordinates. Feature locations are maintained in sensor space to preserve measurement linearity. Image depth is represented by an inverse-depth function, which permits undelayed feature initialization and improves linearity and convergence. It is shown that the resulting navigation solution...
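As an illustration of the inverse-depth idea mentioned in the abstract, the sketch below recovers a 3-D point from an inverse-depth feature parameterization. This is a minimal example of the general technique, not the paper's implementation; the anchor/azimuth/elevation/rho parameterization and the function name are assumptions for illustration.

```python
import numpy as np

def inverse_depth_to_point(anchor, azimuth, elevation, rho):
    """Recover a 3-D world point from an inverse-depth feature.

    anchor    -- camera position (3,) at the feature's first observation
    azimuth   -- bearing angle of the feature ray in the world frame (rad)
    elevation -- elevation angle of the feature ray (rad)
    rho       -- inverse depth (1/m); small rho means a distant feature

    (Illustrative parameterization; the paper's exact state layout may differ.)
    """
    # Unit ray from the anchor position toward the feature
    m = np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])
    # As rho -> 0 the feature recedes to infinity but still supplies
    # bearing information, which is what allows a feature to be added
    # to the filter immediately (undelayed initialization).
    return anchor + m / rho
```

Because the measurement model is nearly linear in rho even for very distant points, a new feature can enter the filter on first sighting with a broad inverse-depth prior instead of waiting for a parallax-based depth estimate.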