We present a novel algorithm that takes as input an unordered set of uncalibrated spherical panoramic images and outputs their relative poses up to a global scale. The panoramas contain both indoor and outdoor shots, and each set was taken at a particular indoor location, e.g., a bakery or a restaurant. The estimated poses are used to build a map of the location and to allow easy visual navigation and exploration in the spirit of Google’s Street View. We also present a dataset of 9 sets of panoramas, together with an annotation tool and ground-truth point correspondences. The manual annotations were used to obtain ground-truth relative poses and to quantitatively evaluate the different parameters of our algorithm; they can also be used to benchmark other approaches. We show excellent results on the dataset and outline directions for future work.