We present the results of an experiment investigating the effects of a talking head's gaze behavior on users' assessment of the quality of the interface. We compared a version that used life-like rules for gazing, a version that kept its eyes fixed on the visitor most of the time, and a version that gazed at random. We found significant differences between these gaze algorithms in terms of ease of use, efficiency, and other quality factors.

Keywords
Conversational agents, gaze, evaluation.