During face-to-face conversation, the speaker’s head is continually in motion. These movements serve a variety of important communicative functions. Our goal is to develop a model of the speaker’s head movements that can be used to generate head movements for virtual agents from gesture annotation corpora. In this paper, we focus on the first step of the head movement generation process: predicting when the speaker should use head nods. We describe our machine-learning approach, which creates a head nod model from annotated corpora of face-to-face human interaction, relying on linguistic features of the surface text. We also describe in detail the feature selection process, the training process, and the evaluation of the learned model on test data. The results show that the model is able to predict head nods with high precision and recall.

Categories and Subject Descriptors
I.2.6 [Artificial Intelligence]: Learning; I.2.11 [Distributed Artificial Intelligence]: Intelligent agents
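To make the prediction task described in the abstract concrete, the following is a minimal sketch of a word-level formulation: each word of an utterance is labeled nod or no-nod, and a classifier is trained on simple lexical features. The toy corpus, the feature set, and the choice of a scikit-learn logistic-regression learner are all illustrative assumptions, not the feature selection process or learning algorithm the paper actually uses.

```python
# A minimal sketch of word-level head nod prediction from surface text.
# Assumptions (not from the paper): a tiny hand-made nod-annotated corpus,
# simple lexical features, and a logistic-regression learner.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Toy annotated corpus: (token, nod_label) pairs per utterance.
# 1 = speaker nods on this word, 0 = no nod.  Hypothetical data.
corpus = [
    [("yes", 1), ("I", 0), ("agree", 1), ("with", 0), ("that", 0)],
    [("no", 0), ("that", 0), ("is", 0), ("right", 1)],
    [("okay", 1), ("let", 0), ("us", 0), ("continue", 0)],
]

def token_features(tokens, i):
    """Simple lexical features for the i-th word of an utterance."""
    return {
        "word": tokens[i].lower(),
        "prev_word": tokens[i - 1].lower() if i > 0 else "<s>",
        "utterance_initial": i == 0,
    }

# Flatten the corpus into per-word feature dicts and nod labels.
X_dicts, y = [], []
for utterance in corpus:
    tokens = [w for w, _ in utterance]
    for i, (_, label) in enumerate(utterance):
        X_dicts.append(token_features(tokens, i))
        y.append(label)

vectorizer = DictVectorizer()
X = vectorizer.fit_transform(X_dicts)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Scored on the training data here purely for illustration; the paper
# evaluates the learned model on held-out test data.
pred = model.predict(X)
print("precision:", precision_score(y, pred, zero_division=0))
print("recall:", recall_score(y, pred, zero_division=0))
```

In this formulation, extending the feature dictionary (e.g., with part-of-speech tags or dialogue-act labels) and swapping in a sequence-aware learner are the natural next steps; the sketch only fixes the shape of the data and the per-word prediction target.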