The SenseCam is a wearable camera that automatically takes photos of the wearer’s activities, generating thousands of images per day. Automatically organising these images for efficient search and retrieval is a challenging task, but it can be simplified by attaching semantic information to each photo, such as the wearer’s location at the time of capture. We propose a method for automatically determining the wearer’s location using an annotated image database, with images described by SURF interest point descriptors. We show that SURF outperforms SIFT in matching SenseCam images, and that matching can be performed efficiently using hierarchical trees of SURF descriptors. Furthermore, re-ranking the top-ranked images using bi-directional SURF matches yields an additional improvement in location matching performance.
Ciarán Ó Conaire, Michael Blighe, Noel E. O’Connor
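The abstract mentions re-ranking the top candidate images using bi-directional SURF matches. Below is a minimal sketch of that symmetric-matching idea, assuming descriptors have already been extracted as numpy float arrays; the function names (`mutual_match_count`, `rerank`) and the brute-force distance computation are illustrative assumptions, not the paper’s implementation, which retrieves candidates via hierarchical SURF trees.

```python
import numpy as np

def mutual_match_count(desc_a: np.ndarray, desc_b: np.ndarray) -> int:
    """Count bi-directional (mutual nearest-neighbour) matches between two
    sets of SURF descriptors, given as (n, 64) and (m, 64) float arrays."""
    # Pairwise squared Euclidean distances between all descriptor pairs.
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=2)
    a_to_b = d2.argmin(axis=1)  # nearest neighbour in B for each descriptor in A
    b_to_a = d2.argmin(axis=0)  # nearest neighbour in A for each descriptor in B
    # A match is kept only if it is the nearest neighbour in both directions.
    return int(sum(b_to_a[j] == i for i, j in enumerate(a_to_b)))

def rerank(query_desc, candidates):
    """Re-rank candidate images (a list of (image_id, descriptors) pairs,
    assumed to come from an earlier tree-lookup stage) by how many mutual
    matches each shares with the query image."""
    return sorted(candidates,
                  key=lambda c: mutual_match_count(query_desc, c[1]),
                  reverse=True)
```

The symmetry constraint discards one-directional matches, which tend to be spurious, so the surviving match count is a more reliable similarity score for ranking candidate locations.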