We introduce a set of statistical geometric tools designed to identify the objects a user refers to through speech and gesture in a multimodal augmented reality system. SenseShapes are volumetric regions of interest that can be attached to parts of the user’s body to provide statistical information about the user’s interaction with objects. To assist in object selection, we generate a rich set of statistical data and dynamically choose which data to consider based on the current situation.
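To make the idea concrete, the following is a minimal Python sketch of one such region of interest: a cone attached to a tracked pointing hand that accumulates per-object statistics over tracker frames (dwell time inside the volume and angular offset from the cone axis) and ranks candidate objects accordingly. The names (ConeRegion, SelectionStats, rank_candidates) and the particular statistics are illustrative assumptions, not the system’s actual implementation.

```python
# Hypothetical sketch of a cone-shaped region of interest for multimodal
# object selection; names and statistics are illustrative, not the paper's API.
import math
from dataclasses import dataclass

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def norm(a):   return math.sqrt(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

@dataclass
class ConeRegion:
    """Conical volume whose apex and axis follow a tracked body part."""
    apex: tuple         # cone origin in world coordinates (x, y, z)
    axis: tuple         # unit vector along the pointing direction
    half_angle: float   # cone half-angle, in radians

    def angular_offset(self, point):
        """Angle between the cone axis and the direction to `point`."""
        v = sub(point, self.apex)
        d = norm(v)
        if d == 0.0:
            return 0.0
        c = max(-1.0, min(1.0, dot(v, self.axis) / d))
        return math.acos(c)

    def contains(self, point):
        return self.angular_offset(point) <= self.half_angle

@dataclass
class SelectionStats:
    """Per-object evidence accumulated over successive frames."""
    frames_inside: int = 0
    frames_total: int = 0
    offset_sum: float = 0.0   # sum of angular offsets while inside

    @property
    def time_fraction(self):
        return self.frames_inside / self.frames_total if self.frames_total else 0.0

    @property
    def mean_offset(self):
        return self.offset_sum / self.frames_inside if self.frames_inside else math.pi

def update(region, objects, stats):
    """Accumulate one frame of statistics for every candidate object."""
    for name, center in objects.items():
        s = stats.setdefault(name, SelectionStats())
        s.frames_total += 1
        if region.contains(center):
            s.frames_inside += 1
            s.offset_sum += region.angular_offset(center)

def rank_candidates(stats):
    """Rank by dwell time inside the volume; centrality breaks ties."""
    return sorted(stats, key=lambda n: (stats[n].time_fraction,
                                        -stats[n].mean_offset), reverse=True)

# Example: a hand-attached cone pointing along +z, two candidate objects.
cone = ConeRegion(apex=(0, 0, 0), axis=(0, 0, 1), half_angle=math.radians(15))
objects = {"mug": (0.1, 0.0, 1.0), "lamp": (1.0, 1.0, 1.0)}
stats = {}
for _ in range(30):            # 30 simulated tracker frames
    update(cone, objects, stats)
print(rank_candidates(stats))  # -> ['mug', 'lamp']
```

The ranking key here is one plausible way to weight the accumulated statistics; a full system could instead reweight or swap statistics dynamically depending on the current interaction context, as described above.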