This paper identifies several issues in multimodal dialogues between a companion robot and a human user. Specifically, these issues pertain to the synchronization of multimodal input and output, and to the handling of expected and unexpected input, including input that is contradictory across different modalities. Furthermore, a novel way of visually representing multimodal dialogues is presented. Ultimately, this work represents first steps towards the development of a principled and generic method for programming multimodal dialogues.
Nieske L. Vergunst, Bas R. Steunebrink, Mehdi Dastani