This paper describes the integration of perceptual guidelines from human vision with an AI-based mixed-initiative search technique. The result is a visualization assistant, a system that identifies perceptually salient visualizations for large, multidimensional collections of data. Understanding how the low-level human visual system "sees" visual information in an image allows us to: (1) evaluate a particular visualization, and (2) direct the search algorithm toward new visualizations that may be better than those seen so far. In this way we limit the search to locations with the highest potential to contain effective visualizations. One testbed application for this work is the visualization of intelligent e-commerce auction agents participating in a simulated online auction environment. We describe how the visualization assistant was used to choose effective methods for visualizing these data.
Christopher G. Healey, Robert St. Amant, Jiae Chang
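To make the evaluate-and-search loop described above concrete, the sketch below shows one possible structure: a hypothetical `perceptual_score` evaluator standing in for the perceptual guidelines, driving a simple hill-climbing search over attribute-to-feature mappings. The attribute and feature names are illustrative assumptions, not taken from the paper, and the actual system's evaluation engine and mixed-initiative search are more sophisticated than this toy loop.

```python
# Minimal sketch (not the authors' implementation): a hill-climbing search that
# uses a placeholder perceptual evaluation function to score candidate
# visualizations and steer the search toward more salient mappings.

import random
from typing import Dict, List

# A "visualization" here is a mapping from data attributes to visual features.
Visualization = Dict[str, str]

ATTRIBUTES = ["price", "bid_count", "agent_type"]       # hypothetical data attributes
FEATURES = ["hue", "size", "orientation", "luminance"]  # candidate visual features


def perceptual_score(vis: Visualization) -> float:
    """Placeholder evaluator: higher means more perceptually salient.
    A real evaluator would apply guidelines from low-level human vision."""
    # Toy rule: reward distinct feature assignments and (arbitrarily)
    # favor hue for the categorical attribute.
    score = len(set(vis.values())) / len(vis)
    if vis.get("agent_type") == "hue":
        score += 0.25
    return score


def neighbors(vis: Visualization) -> List[Visualization]:
    """Generate candidates by reassigning one attribute to a different feature."""
    result = []
    for attr in vis:
        for feat in FEATURES:
            if feat != vis[attr]:
                candidate = dict(vis)
                candidate[attr] = feat
                result.append(candidate)
    return result


def guided_search(steps: int = 100) -> Visualization:
    """Hill climb: only move to neighbors the evaluator judges more effective."""
    current = {attr: random.choice(FEATURES) for attr in ATTRIBUTES}
    for _ in range(steps):
        best = max(neighbors(current), key=perceptual_score)
        if perceptual_score(best) <= perceptual_score(current):
            break  # no neighbor improves on the current mapping
        current = best
    return current


if __name__ == "__main__":
    print(guided_search())
```

In this sketch the evaluator limits where the search spends effort, mirroring the idea of visiting only locations with high potential to contain effective visualizations; the mixed-initiative aspect (a human steering or overriding the search) is omitted.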