We describe a set of image measurements that are invariant to the camera's internal parameters but vary with its location. We show that, using these measurements, a robot can localize itself from known landmarks with uncalibrated cameras. We also show that the Euclidean structure of 3D world points can be computed, again with uncalibrated cameras, from multiple views taken from known positions. The internal parameters of the camera may be altered freely during these operations. Our initial experiments demonstrate the applicability of the method.