To create realistic and expressive virtual humans, we need to develop better models of the processes and dynamics of human emotions and expressions. A first step in this effort is...
Previous research in automatic facial expression recognition has been limited to recognition of gross expression categories (e.g., joy or anger) in posed facial behavior under wel...
Tsuyoshi Moriyama, Takeo Kanade, Jeffrey F. Cohn, ...
We examined the open issue of whether FACS action units (AUs) can be recognized more accurately by classifying local regions around the eyes, brows, and mouth compared to analyzin...
A facial analysis-synthesis framework based on a concise set of local, independently actuated, Coarticulation Regions (CR) is presented for the control of 2D animated characters. ...
In this paper, we propose a method to model the material constants (Young’s modulus) of the skin in subregions of the face from the motion observed in multiple facial expressi...
Vasant Manohar, Matthew Shreve, Dmitry Goldgof, Su...