In recent years, photo context metadata (e.g., date, GPS coordinates) have proved useful for managing personal photos. However, such metadata are still poorly exploited by photo retrieval systems. To overcome this limitation, we propose an approach that incorporates contextual metadata into a keyword-based photo retrieval process. We use metadata about the photo shot context (address, nearby objects, season, light status, etc.) to generate a bag of words for indexing each photo. We extend the Vector Space Model to transform these shot-context words into document-vector terms. In addition, spatial reasoning and geographical ontologies are used to infer new indexing terms. This facilitates the query-document matching process and also enables semantic comparison between query terms and photo annotations.
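To make the indexing idea concrete, the following minimal sketch (hypothetical names and metadata fields, not the paper's actual implementation) derives a bag of context words from each photo's shot-context metadata, weights the words with classical TF-IDF, and scores a query against the photo vectors with cosine similarity, in the spirit of the extended Vector Space Model described above. The ontology-based inference of broader terms (e.g., deriving a country from a city name) is assumed to happen before this step and is not shown.

```python
from math import log, sqrt
from collections import Counter

def context_to_bag_of_words(metadata):
    """Flatten a photo's shot-context metadata into indexing terms.

    `metadata` is a hypothetical dict, e.g.:
    {"address": ["Paris", "rue de Rivoli"], "season": "winter",
     "light": "daylight", "nearby": ["Louvre", "Seine"]}
    """
    words = []
    for value in metadata.values():
        if isinstance(value, list):
            words.extend(v.lower() for v in value)
        else:
            words.append(value.lower())
    return words

def tf_idf_vectors(photo_bags):
    """Build one TF-IDF-weighted term vector per photo (standard VSM)."""
    n = len(photo_bags)
    doc_freq = Counter()            # number of photos containing each term
    for bag in photo_bags:
        doc_freq.update(set(bag))
    vectors = []
    for bag in photo_bags:
        tf = Counter(bag)
        vectors.append({t: tf[t] * log(n / doc_freq[t]) for t in tf})
    return vectors

def cosine_similarity(vec_a, vec_b):
    """Query-document matching score in the vector space."""
    dot = sum(w * vec_b.get(t, 0.0) for t, w in vec_a.items())
    norm_a = sqrt(sum(w * w for w in vec_a.values()))
    norm_b = sqrt(sum(w * w for w in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Illustrative usage with invented metadata:
photos = [
    {"address": ["Paris"], "season": "winter", "nearby": ["Louvre"]},
    {"address": ["Rome"], "season": "summer", "nearby": ["Colosseum"]},
]
bags = [context_to_bag_of_words(m) for m in photos]
vectors = tf_idf_vectors(bags)
query = {"winter": 1.0, "paris": 1.0}   # keyword query mapped to term space
scores = [cosine_similarity(query, v) for v in vectors]
```

A keyword query is thus compared against context-derived annotations in the same term space; with ontology-inferred terms added before weighting, a query on "France" could also match photos annotated only with "Paris".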