In pursuing the ultimate goal of enabling intelligent conversation with a virtual human, two key challenges are selecting which nonverbal behaviors to implement and realizing those behaviors practically and reliably. In this paper, we explore the signals interlocutors use to display uncertainty face to face. People’s signals were identified and annotated through systematic coding and then implemented on our Embodied Conversational Agent (ECA), RUTH. We investigated whether RUTH animations were as effective as videos of talking people in conveying an agent’s level of uncertainty to human viewers. Our results show that people could pick up on different levels of uncertainty not only from another conversational partner but also from the RUTH simulations. In addition, we used animations containing different subsets of facial signals to understand in more detail how nonverbal behavior conveys uncertainty. The findings illustrate the promise of our methodology for creating specific inven...