Location-based context is important for many applications. Previous systems offered only coarse room-level features or relied on manually specified room regions to determine fine-scale features. We propose a location-context mechanism based on activity maps, which define regions of similar context from observations of 3-D patterns of location and motion in an environment. We describe an algorithm for obtaining activity maps through spatio-temporal clustering of visual tracking data. We show that the recovered maps correspond to regions associated with common tasks in the environment and describe their use in several applications.
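The abstract does not spell out the clustering algorithm itself, so the following is only a minimal illustrative sketch of the general idea: grouping 3-D track observations by position and motion to obtain candidate activity-map regions. DBSCAN is used here as a stand-in for the paper's clustering method, and the synthetic data, feature weighting, and parameters are all assumptions for illustration.

```python
# Illustrative sketch (not the paper's algorithm): cluster 3-D tracking
# observations into candidate activity regions using position + motion features.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Synthetic tracking data: each row is one observation of a tracked person,
# (x, y, z) position in metres plus (vx, vy, vz) instantaneous velocity.
desk  = rng.normal([1.0, 2.0, 0.0, 0.0, 0.0, 0.0], 0.15, size=(200, 6))  # sitting still
door  = rng.normal([5.0, 0.5, 0.0, 0.8, 0.0, 0.0], 0.25, size=(150, 6))  # walking through
shelf = rng.normal([3.0, 4.0, 0.0, 0.0, 0.1, 0.0], 0.20, size=(100, 6))  # browsing
obs = np.vstack([desk, door, shelf])

# Weight motion features (assumed value) so regions with similar positions but
# different motion patterns (e.g. sitting vs. walking) can separate.
motion_weight = 2.0
features = obs.copy()
features[:, 3:] *= motion_weight

labels = DBSCAN(eps=0.6, min_samples=10).fit_predict(features)

# Each non-noise label is one candidate activity region; summarise its extent.
for label in sorted(set(labels) - {-1}):
    region = obs[labels == label]
    centre = region[:, :3].mean(axis=0)
    speed = np.linalg.norm(region[:, 3:], axis=1).mean()
    print(f"region {label}: {len(region)} obs, "
          f"centre {centre.round(2)}, mean speed {speed:.2f} m/s")
```

In this sketch, each recovered cluster plays the role of one activity-map region (e.g. a desk, a doorway, a shelf), which a context-aware application could then associate with a task or behaviour.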