Research has shown that the dynamics of facial motion are important in the perception of gender, identity, and emotion. In this paper we show that a multi-linear tensor framework can be used to extract facial motion signatures and to cluster these signatures by gender or by emotion. We consider only the dynamics of the internal features of the face (e.g. eyebrows, eyelids, and mouth) so as to remove structural and shape cues to identity and gender; such structural gender cues include jaw width and forehead shape, and their removal ensures that clustering relies on dynamic cues alone. Additionally, we demonstrate the generative capabilities of the tensor framework by reliably synthesising new motion signatures, and we find results comparable to those of human psychology experiments performed on the same facial motion data.
Lisa Gralewski, Neill W. Campbell, Edward Morrison
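
Although the abstract gives no implementation details, a minimal sketch of the kind of pipeline it describes might look as follows, assuming a hypothetical motion tensor of tracked internal-feature trajectories (sequences × frames × feature coordinates). The shapes, ranks, and helper names (`unfold`, `hosvd`) are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: a truncated HOSVD (multilinear/Tucker-style)
# decomposition of a motion tensor, with the sequence-mode factor rows
# treated as "motion signatures" for clustering and synthesis.
import numpy as np
from sklearn.cluster import KMeans

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: per-mode factors from each unfolding's leading
    left singular vectors, then the core via mode products with U^T."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Hypothetical data: 40 sequences x 100 frames x 30 feature coordinates
# (tracked internal features: eyebrows, eyelids, mouth).
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 100, 30))

core, (U_seq, U_time, U_feat) = hosvd(X, ranks=(10, 20, 15))

# Each row of U_seq is one sequence's motion signature; cluster the
# rows into two groups (e.g. by gender) with k-means.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(U_seq)

# Generative use: blend two signatures, then reconstruct a new
# frames-by-features sequence via mode products with the core.
sig = 0.5 * (U_seq[0] + U_seq[1])
synth = np.einsum('i,ijk,tj,fk->tf', sig, core, U_time, U_feat)
```

In this sketch the sequence-mode factor rows play the role of motion signatures: clustering them corresponds to the discriminative use described in the abstract, while blending rows and multiplying back through the core and the remaining factors is one plausible way a tensor model can synthesise novel motion.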