This paper presents a spatial-semantic modeling system that automatically learns object-place relations from an online annotated database and applies these relations to a variety of real-world tasks. The system labels novel scenes with place information, as we demonstrate on test scenes drawn from the same source as our training set. We have designed our system so that it can, in future work, enhance a robot platform that performs state-of-the-art object recognition and builds object maps of realistic environments. In this context, we demonstrate the use of spatial-semantic information to cluster and assign place labels to object maps obtained from real homes. This place information is fed back into the robot system to inform an object search planner about the likely locations of a query object. Taken as a whole, the system represents a new level of spatial reasoning and semantic understanding for a physical platform.
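To make the path from learned object-place relations to search planning concrete, the following Python sketch illustrates one way such statistics could drive an object search planner: co-occurrence counts from an annotated database yield P(place | object), and the places in a labeled object map are ranked by that probability for a query object. All function names, data structures, and the counting scheme are hypothetical illustrations under these assumptions, not the system's actual implementation.

```python
from collections import Counter, defaultdict

def learn_object_place_probs(annotations):
    """Estimate P(place | object) from (object, place) co-occurrence counts.

    `annotations` is a hypothetical list of (object_label, place_label) pairs,
    e.g. drawn from an online annotated scene database.
    """
    counts = defaultdict(Counter)
    for obj, place in annotations:
        counts[obj][place] += 1
    probs = {}
    for obj, place_counts in counts.items():
        total = sum(place_counts.values())
        probs[obj] = {place: c / total for place, c in place_counts.items()}
    return probs

def rank_places_for_query(query_object, labeled_map, probs):
    """Rank the places in a labeled object map by P(place | query_object).

    `labeled_map` is a hypothetical dict mapping cluster ids to place labels,
    as might result from clustering and place-labeling an object map.
    """
    place_probs = probs.get(query_object, {})
    return sorted(
        labeled_map.items(),
        key=lambda item: place_probs.get(item[1], 0.0),
        reverse=True,
    )

if __name__ == "__main__":
    # Toy annotated data: (object, place) pairs.
    annotations = [
        ("mug", "kitchen"), ("mug", "kitchen"), ("mug", "office"),
        ("pillow", "bedroom"), ("toothbrush", "bathroom"),
    ]
    probs = learn_object_place_probs(annotations)

    # A labeled object map of a home: cluster id -> place label.
    labeled_map = {0: "kitchen", 1: "bedroom", 2: "office"}

    # A search planner could visit clusters in this order for the query object.
    for cluster_id, place in rank_places_for_query("mug", labeled_map, probs):
        print(cluster_id, place)
```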