This paper deals with automatically learning the spatial distribution of a set of images. That is, given a sequence of images acquired from well-separated locations, how can they be arranged to best explain their genesis? The solution to this problem can be viewed as an instance of robot mapping, although it can also be used in other contexts. We examine the problem where only limited prior odometric information is available, employing a feature-based method derived from a probabilistic pose estimation framework. Initially, a set of visual features is selected from the images and correspondences are found across the ensemble. The images are then localized by first assembling the small subset of images for which odometric confidence is high, and then sequentially inserting the remaining images, localizing each against the previous estimates and taking advantage of any priors that are available. We present experimental results validating the approach, and demonstrating metrically and topolog...
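
The following is a minimal sketch, not the authors' implementation, of the incremental localization strategy outlined above: seed the map with the images whose odometric priors are trusted, then insert the remaining images one at a time, estimating each pose against the images already placed. The `Pose`, `Image`, and `estimate_pose` names, the confidence threshold, and the prior representation are all illustrative assumptions; the actual feature matching and probabilistic pose solver are abstracted behind a callback.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple


@dataclass
class Pose:
    x: float
    y: float
    theta: float


@dataclass
class Image:
    image_id: int
    features: List[Tuple[int, float, float]]  # (feature_id, u, v) correspondences
    odometry_prior: Optional[Pose]            # None if no prior is available
    odometry_confidence: float                # heuristic confidence in that prior


def localize_ensemble(
    images: List[Image],
    estimate_pose: Callable[[Image, Dict[int, Pose]], Pose],  # hypothetical solver
    confidence_threshold: float = 0.9,
) -> Dict[int, Pose]:
    """Incrementally place images, seeding with the high-confidence subset."""
    poses: Dict[int, Pose] = {}

    # 1. Seed the map with the small subset of images whose odometric
    #    priors are considered reliable.
    for im in images:
        if im.odometry_prior is not None and im.odometry_confidence >= confidence_threshold:
            poses[im.image_id] = im.odometry_prior

    # 2. Insert the remaining images one at a time, localizing each against
    #    the poses estimated so far (the solver may also use any weak prior
    #    attached to the image).
    for im in images:
        if im.image_id not in poses:
            poses[im.image_id] = estimate_pose(im, poses)

    return poses
```

In this sketch the order of insertion is simply the input order; in practice one would insert the image with the strongest feature overlap or prior next, so that each new pose is constrained by well-localized neighbors.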