Advances in sensor technologies have made it easier, and increasingly common, to capture information across multiple media. This is especially true for personal multimedia information. Effective assimilation of such information requires recognizing the semantic correlations between media and the ability to model and interact with them in a unified manner. This paper presents our research on designing capabilities that support user–data interactions in the context of these issues. Central to our approach is the characterization and modeling of media using the notion of an “event”. Building on this idea, we propose operations and visualizations that not only allow events to be generated and manipulated, but also support interaction with semantically important characteristics of the underlying information. Experimental and comparative evaluations demonstrate the efficacy and promise of the approach.