ICMLA 2008

Multimodal Music Mood Classification Using Audio and Lyrics

In this paper we present a study on music mood classification using audio and lyrics information. The mood of a song is expressed by its musical features, but a relevant part also seems to be conveyed by the lyrics. We evaluate each factor independently and explore the possibility of combining both, using Natural Language Processing and Music Information Retrieval techniques. We show that standard distance-based methods and Latent Semantic Analysis can classify the lyrics significantly better than random, although their performance remains well below that of audio-based techniques. We then introduce a method based on differences between language models that achieves performance closer to that of audio-based classifiers. Moreover, integrating this method into a multimodal (audio + text) system improves the overall performance. We demonstrate that lyrics and audio information are complementary and can be combined to improve a classification system.
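The paper itself ships no code, but the abstract names two concrete techniques: an LSA-based lyrics classifier and a multimodal (audio + text) combination. The sketch below is a minimal illustration of both in scikit-learn, not the authors' implementation: it builds a TF-IDF + truncated-SVD (LSA) pipeline with a distance-based (cosine k-NN) classifier for lyrics, uses an SVM as a stand-in for whichever audio classifier the authors used, and fuses the two by averaging class probabilities. All data variables in the usage comment (train_lyrics, train_moods, train_audio_feats, ...) are hypothetical placeholders for a labelled corpus.

```python
# Hedged sketch (not the authors' code) of the ideas described in the abstract.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def build_lyrics_classifier(n_components=100):
    """LSA baseline: TF-IDF followed by truncated SVD projects lyrics into a
    latent semantic space, where a distance-based (cosine k-NN) classifier
    assigns mood labels."""
    return make_pipeline(
        TfidfVectorizer(stop_words="english"),
        TruncatedSVD(n_components=n_components),
        KNeighborsClassifier(n_neighbors=5, metric="cosine"),
    )

def build_audio_classifier():
    """Stand-in for an audio-based classifier operating on precomputed audio
    descriptors; probability=True so its outputs can be fused with the
    lyrics model's probabilities."""
    return SVC(probability=True)

def fuse_predictions(p_lyrics, p_audio, w=0.5):
    """Late fusion: weighted average of the per-class probability matrices
    from the two modalities (both classifiers must share the same label
    ordering, which scikit-learn guarantees via sorted classes_)."""
    return w * p_lyrics + (1.0 - w) * p_audio

# Usage with real labelled data (placeholders, not provided here):
#   lyr_clf = build_lyrics_classifier().fit(train_lyrics, train_moods)
#   aud_clf = build_audio_classifier().fit(train_audio_feats, train_moods)
#   p = fuse_predictions(lyr_clf.predict_proba(test_lyrics),
#                        aud_clf.predict_proba(test_audio_feats))
#   predicted = p.argmax(axis=1)
```

Weighted probability averaging is only one way to exploit the complementarity the authors report; decision-level voting or a stacked meta-classifier are common alternatives for this kind of audio + text fusion.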
Type: Conference
Year: 2008
Where: ICMLA
Authors: Cyril Laurier, Jens Grivolla, Perfecto Herrera