We present a method that, using only an uncalibrated camera, captures the geometry and appearance of an object and, at a later stage, registers and overlays it into a new scene for augmented reality (AR). From image information alone, a coarse object geometry is first obtained by structure-from-motion; a dynamic, view-dependent texture is then estimated to account for the differences between the reprojected coarse model and the training images. For AR rendering, the user interactively aligns the object structure in one frame, the object and scene structures are registered, and subsequent frames are rendered by a virtual scene camera whose parameters are estimated by real-time visual tracking. Using the same viewing geometry for object acquisition, registration, and rendering ensures consistency and minimizes errors.
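To make the view-dependent texture step concrete, the following is a minimal sketch of one common blending scheme: for a novel viewing direction, the textures captured from the nearest training views are blended with weights proportional to angular similarity. This is an illustrative assumption, not the paper's exact estimation procedure; the function name `view_dependent_weights` and the top-`k` cosine weighting are hypothetical.

```python
import numpy as np

def view_dependent_weights(novel_dir, training_dirs, k=3):
    """Blend weights for a view-dependent texture (illustrative sketch).

    The k training views whose viewing directions are closest to the
    novel view receive weights proportional to the cosine of the angle
    between directions; all other views receive weight zero.
    """
    novel = novel_dir / np.linalg.norm(novel_dir)
    dirs = training_dirs / np.linalg.norm(training_dirs, axis=1, keepdims=True)
    sims = dirs @ novel                      # cosine similarity to each training view
    nearest = np.argsort(-sims)[:k]          # indices of the k closest training views
    w = np.clip(sims[nearest], 0.0, None)    # ignore views facing away
    w = w / w.sum()                          # normalize so the blend sums to 1
    weights = np.zeros(len(training_dirs))
    weights[nearest] = w
    return weights

# Four training-view directions and one novel direction (unit vectors up to scale).
dirs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [0.7, 0.7, 0.0]])
w = view_dependent_weights(np.array([0.9, 0.4, 0.0]), dirs, k=2)
```

In a full pipeline, the blended texture would be composited onto the reprojected coarse model, so the blending corrects exactly the reprojection residual that the coarse geometry cannot represent.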