Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech, and we show that integrating information from audio and video leads to improved reliability compared to single-modal (audio-only or video-only) approaches. We also investigated the level at which audiovisual information should be fused for the best performance. When tested on 96 audiovisual sequences depicting spontaneously displayed (as opposed to posed) laughter and speech episodes, the proposed audiovisual feature-level approach achieved an 86.9% recall rate with 76.7% precision.
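For readers unfamiliar with the distinction between fusion levels, feature-level (early) fusion concatenates the per-frame audio and video feature vectors into a single joint representation before classification, whereas decision-level (late) fusion combines the outputs of separate per-modality classifiers. The following is a minimal sketch of feature-level fusion; the feature dimensions, synthetic data, and classifier choice are illustrative assumptions and do not reproduce the pipeline evaluated in this work.

```python
# Sketch of feature-level (early) fusion for laughter-vs-speech
# classification. Feature names, dimensions, and the classifier
# are assumptions for illustration, not the paper's exact setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

n_frames = 500
audio_feats = rng.normal(size=(n_frames, 13))  # e.g., per-frame spectral features (assumed)
video_feats = rng.normal(size=(n_frames, 20))  # e.g., facial-point features (assumed)
labels = rng.integers(0, 2, size=n_frames)     # 1 = laughter, 0 = speech (synthetic)

# Feature-level fusion: concatenate the two modalities into one
# vector per frame, then train a single classifier on the result.
fused = np.hstack([audio_feats, video_feats])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)

print(f"recall:    {recall_score(y_test, pred):.3f}")
print(f"precision: {precision_score(y_test, pred):.3f}")
```

A decision-level variant would instead train one classifier per modality and merge their predictions (e.g., by averaging class probabilities); the recall and precision reported above are the metrics used to evaluate both strategies.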