We introduce our research on anticipatory and coordinated interaction between a virtual human and a human partner. Rather than adhering to the turn-taking paradigm, we investigate interaction in which the human interlocutor and the humanoid display expressive behavior simultaneously. We have designed several applications in which such behavior can be studied and specified, in particular behavior that requires synchronization based on predictions from performance and perception. We offer preliminary observations, drawn from the literature and from analogies with our applications, on the role of predictions in conversation, and we present the architectural consequences for the design of virtual humans.