We present a method for testing subjects' performance in a realistic (end-to-end) information-understanding task, the rapid understanding of large document collections, and we discuss lessons learned from our attempts to measure representative information-understanding tools and behaviors. To further our understanding of this task, we need to move beyond overly constrained and artificial measurements of easily instrumented behavior. From observation, we know that information analysis is often performed under time pressure and requires the use of large document collections. Instrumenting people in their workplace is often untenable, yet oversimplified laboratory studies often miss explanatory richness. We argue that studies of information analysts need to be based on tests that are closely aligned with their natural tasks. Understanding human performance in such tasks requires analysis that accounts for the many subtle factors that influence final performance, including the role of background knowledge.
Malcolm Slaney, Daniel M. Russell