Multimodal person authentication using speech, face and visual speech

This paper presents a method for automatic multimodal person authentication using the speech, face and visual speech modalities. The proposed method uses motion information to localize the face region, and the face region is processed in the YCrCb color space to determine the locations of the eyes. The nonlip region of the face is modeled with a Gaussian distribution, which is used to estimate the center of the mouth. Facial and visual speech features are extracted using multiscale morphological erosion and dilation operations, respectively: facial features are extracted relative to the locations of the eyes, and visual speech features relative to the locations of the eyes and mouth. Acoustic features are derived from the speech signal and represented by weighted linear prediction cepstral coefficients (WLPCC). Autoassociative neural network (AANN) models are used to capture the distribution of the extracted acoustic, facial and visual speech features. The ...
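
A minimal sketch of the multiscale morphological feature extraction the abstract describes, not the authors' exact procedure: grayscale erosion and dilation responses are computed at several scales and sampled at a point of interest, such as an estimated eye or mouth location. The scale sizes, image, and sampling point below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def morph_features_at(img, row, col, scales=(3, 5, 7, 9)):
    """Multiscale grayscale erosion (facial) and dilation (visual speech)
    responses sampled at pixel (row, col). Scales are assumed values."""
    feats = []
    for s in scales:
        feats.append(grey_erosion(img, size=(s, s))[row, col])
        feats.append(grey_dilation(img, size=(s, s))[row, col])
    return np.asarray(feats)

img = np.random.rand(64, 64)           # stand-in for a localized face region
print(morph_features_at(img, 20, 32))  # e.g., near an estimated eye location
```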
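An AANN is an autoencoder trained to reconstruct a client's feature vectors, so a test vector can be scored by its reconstruction error; mapping error to a confidence via exp(-error) is one common convention in the AANN literature. The sketch below is an assumed setup, not the paper's architecture: the layer sizes, synthetic features, and scoring threshold are all illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
client_feats = rng.normal(size=(500, 16))         # stand-in for WLPCC/face features
impostor_feat = rng.normal(loc=2.0, size=(1, 16)) # stand-in impostor vector

# Train the AANN to map each feature vector onto itself through a bottleneck.
aann = MLPRegressor(hidden_layer_sizes=(24, 8, 24), max_iter=2000, random_state=0)
aann.fit(client_feats, client_feats)

def confidence(x):
    # Lower reconstruction error -> higher confidence that x is the client.
    err = np.mean((aann.predict(x) - x) ** 2, axis=1)
    return np.exp(-err)

print(confidence(client_feats[:1]))  # genuine vector: higher confidence
print(confidence(impostor_feat))     # impostor vector: lower confidence
```
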
Type: Journal
Year: 2008
Where: CVIU
Authors: S. Palanivel, B. Yegnanarayana