When we look at a face, we readily perceive that person's gender, expression, identity, age, and attractiveness. Perceivers as well as scientists have hitherto had little success in articulating just what information we employ to achieve these subjectively immediate and effortless classifications. We describe here a method that estimates that information. Observers classified faces embedded in high levels of visual noise as male or female (in a gender task), happy or unhappy (in an expression task), or Tom Cruise or John Travolta (in an individuation task). They were unaware that the underlying face (which was midway between the two classes) was identical throughout a task, with only the noise rendering it more like one category value or the other. The difference between the averages of the noise patterns for each classification decision provided a linear estimate of the information mediating these classifications. When the noise was combined with the underlying face, the resultant images...
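The estimate described above reduces to an average-and-subtract over the trial-by-trial noise fields, sorted by the observer's response. The following NumPy sketch illustrates that computation; the function name, image size, and the simulated linear observer are illustrative assumptions for demonstration, not the authors' code.

```python
import numpy as np

def classification_image(noise_fields, responses):
    """Linear estimate of the information mediating a two-way classification.

    noise_fields : (n_trials, H, W) array of the noise added on each trial
    responses    : (n_trials,) array of 0/1 classification decisions
                   (e.g., 0 = "male", 1 = "female")

    Returns the difference between the mean noise field associated with
    each response, i.e., the classification image described in the abstract.
    """
    noise_fields = np.asarray(noise_fields, dtype=float)
    responses = np.asarray(responses)
    mean_a = noise_fields[responses == 0].mean(axis=0)
    mean_b = noise_fields[responses == 1].mean(axis=0)
    return mean_b - mean_a

# Illustrative use with a simulated observer (all names and values hypothetical):
rng = np.random.default_rng(0)
n_trials, size = 2000, 32
base_face = np.zeros((size, size))            # stand-in for the ambiguous base face
template = rng.standard_normal((size, size))  # stand-in for the observer's internal template
noise = rng.standard_normal((n_trials, size, size))
stimuli = base_face + noise

# The simulated observer responds "1" when the stimulus correlates positively
# with its template, mimicking a linear classifier.
resp = (np.tensordot(stimuli, template, axes=2) > 0).astype(int)

ci = classification_image(noise, resp)
# The recovered classification image should correlate with the observer's template.
print(np.corrcoef(ci.ravel(), template.ravel())[0, 1])
```

With enough trials, the recovered image converges on the template the (simulated) observer used, which is why the difference of noise averages serves as a linear estimate of the information driving the classification.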
Michael C. Mangini, Irving Biederman