Research has shown that the dynamics of facial motion are important in the perception of gender, identity, and emotion. In this paper we show that a multilinear tensor framework can be used to extract facial motion signatures and to cluster these signatures by gender or by emotion. We consider only the dynamics of the internal features of the face (e.g. the eyebrows, eyelids, and mouth) so as to remove structural and shape cues to identity and gender; such structural cues to gender include jaw width and forehead shape, and their removal ensures that dynamic cues alone are being used. Additionally, we demonstrate the generative capabilities of the tensor framework by reliably synthesising new motion signatures.
Lisa Gralewski, Neill W. Campbell, Ian Penton-Voak
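As a concrete illustration of the kind of multilinear decomposition such a framework typically rests on, the sketch below implements a higher-order (N-mode) SVD in NumPy. The tensor layout (people x emotions x motion parameters), the dimensions, and the random data are illustrative assumptions, not details taken from the paper; the rows of the resulting factor matrices play the role of signatures that could be clustered, and recombining them with the core tensor is the generative step.

```python
# Minimal sketch of a higher-order (N-mode) SVD, one common realisation of a
# multilinear tensor framework.  All dimensions and data below are
# illustrative assumptions, not values from the paper.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: arrange the mode-`mode` fibres as matrix columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of `unfold` for a tensor of the given full shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full), 0, mode)

def mode_multiply(tensor, matrix, mode):
    """Multiply `tensor` by `matrix` along the given mode."""
    shape = list(tensor.shape)
    shape[mode] = matrix.shape[0]
    return fold(matrix @ unfold(tensor, mode), mode, shape)

def hosvd(tensor):
    """Higher-order SVD: tensor = core x_0 U0 x_1 U1 ... x_(N-1) U(N-1)."""
    factors = [np.linalg.svd(unfold(tensor, n), full_matrices=False)[0]
               for n in range(tensor.ndim)]
    core = tensor
    for n, U in enumerate(factors):
        core = mode_multiply(core, U.T, n)
    return core, factors

# Hypothetical data tensor: 10 people x 4 emotions x 30 motion parameters.
rng = np.random.default_rng(0)
data = rng.standard_normal((10, 4, 30))
core, (people, emotions, params) = hosvd(data)

# Rows of the people/emotion factor matrices act as "signatures" that can be
# clustered; recombining factor rows with the core synthesises motion data.
reconstructed = core
for n, U in enumerate((people, emotions, params)):
    reconstructed = mode_multiply(reconstructed, U, n)
assert np.allclose(reconstructed, data)
```

In this toy setting the factors are square and orthogonal, so multiplying the core back through all factor matrices reconstructs the data exactly; swapping or interpolating rows of a factor matrix before recombining is the standard way such a decomposition is used generatively.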