Curtin University’s Talking Heads (THs) combine an MPEG-4 compliant Facial Animation Engine (FAE), a Text To Emotional Speech Synthesiser (TTES), and a multi-modal Dialogue Manager (DM) that accesses a Knowledge Base (KB) and outputs Virtual Human Markup Language (VHML) text, which in turn drives the TTES and the FAE. A user enters a question and an animated TH responds with a believable and affective voice and actions. However, this response to the user is normally marked up in VHML by the KB developer to produce the required facial gestures and emotional display. A real person does not react according to fixed rules but according to personality, beliefs, good and bad previous experiences, and training. This paper reviews personality theories and models relevant to THs, and then discusses the research at Curtin University over the last five years on implementing and evaluating personality models. Finally, the paper proposes an active, adaptive personality model to unify that work.
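By way of illustration, a VHML-annotated response might look like the sketch below. This is a minimal sketch only: the tag names used here (person, happy, surprised, smile) are indicative of VHML's emotion and facial animation sub-languages, and are assumed for illustration rather than quoted from the Curtin system.

<!-- illustrative VHML sketch; tag names assumed, not taken from the system -->
<vhml>
  <person disposition="happy">
    <p>
      <happy>Hello! I can certainly help you with that.</happy>
      <surprised intensity="50">Although that is an unusual question.</surprised>
      <smile/>
    </p>
  </person>
</vhml>

In such a mark-up, the emotion elements would shape the prosody produced by the TTES while the facial animation elements drive the FAE, so a single marked-up text controls both voice and face.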
He Xiao, Donald Reid, Andrew Marriott, E. K. Gulla