This paper introduces the Multimodal Multi-view Integrated Database (MMID), which records human activities in presentation situations. MMID contains audio, video, human body motions...
Yuichi Nakamura, Yoshifumi Kimura, Y. Yu, Yuichi O...
The performance of a local-feature-based system using Gabor filters and a global template-matching-based system using a combination of PCA (Principal Component Analysis) and LD...
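The PCA-plus-LDA pipeline mentioned above (commonly paired as "Fisherfaces" in face recognition) can be sketched in plain numpy. This is an illustrative sketch, not the cited system's implementation; the function names `pca` and `lda` and all parameter choices are assumptions for demonstration.

```python
import numpy as np

def pca(X, k):
    """Project rows of X onto the top-k principal components."""
    mu = X.mean(axis=0)
    # SVD of the centered data gives the principal directions in Vt
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T                      # (d, k) projection matrix
    return (X - mu) @ W, W, mu

def lda(X, y):
    """Fisher LDA: directions maximizing between-class over within-class scatter."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))             # within-class scatter
    Sb = np.zeros((d, d))             # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * diff @ diff.T
    # Solve the generalized eigenproblem via pinv(Sw) @ Sb
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    # At most (n_classes - 1) meaningful discriminant directions
    return evecs.real[:, order[:len(classes) - 1]]
```

A typical global template-matching setup first reduces face images with `pca` (to make the within-class scatter invertible), then applies `lda` in the reduced space and classifies by nearest class mean along the discriminant directions.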
A method for extracting information about facial expressions from images is presented. Facial expression images are coded using a multi-orientation, multi-resolution set of Gabor ...
Michael J. Lyons, Shigeru Akamatsu, Miyuki Kamachi...
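The multi-orientation, multi-resolution Gabor coding described above can be illustrated with a small filter bank. This is a minimal sketch, not the authors' code; the kernel parameterization (wavelengths, orientation count, kernel size, `sigma`) and the direct sliding-window correlation are assumptions chosen to keep the example dependency-free.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """2-D Gabor kernel: a Gaussian envelope times an oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates by theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_code(image, wavelengths=(4, 8, 16), n_orient=4, size=15, sigma=4.0):
    """Code an image as the responses of a multi-resolution, multi-orientation bank."""
    responses = []
    for lam in wavelengths:                          # resolutions (wavelengths)
        for k in range(n_orient):                    # evenly spaced orientations
            theta = k * np.pi / n_orient
            kern = gabor_kernel(size, lam, theta, sigma)
            h, w = image.shape
            out = np.zeros((h - size + 1, w - size + 1))
            # Direct valid-mode correlation; FFT-based filtering is faster
            # in practice but needs no extra code here.
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = np.sum(image[i:i + size, j:j + size] * kern)
            responses.append(out)
    return np.stack(responses)
```

Sampling such responses at facial landmarks (rather than densely, as here) yields the kind of feature vector that Gabor-coding approaches compare across expressions.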
In this paper we describe a prototype system for directing a computer generated scene for film planning. The system is based upon the concepts of the Intuitive Interface, an envir...
We describe a virtual mirror interface which can react to people using robust, real-time face tracking. Our display can directly combine a user's face with various graphical ...
Trevor Darrell, Gaile G. Gordon, John Woodfill, Mi...
Current approaches to automated analysis have focused on a small set of prototypic expressions (e.g., joy or anger). Prototypic expressions occur infrequently in everyday life, ho...
Jeffrey F. Cohn, Adena J. Zlochower, James Jenn-Ji...
This paper presents a method for automatically locating a person's face in a given image consisting of a head-and-shoulders view of the person against a complex b...
This paper reviews characteristics of human face recognition that should be reflected in any psychologically plausible computational model of face recognition. We then summarise r...