
ACL 2008

Correlation between ROUGE and Human Evaluation of Extractive Meeting Summaries

Automatic summarization evaluation is critical to the development of summarization systems. While ROUGE has been shown to correlate well with human evaluation of content match in text summarization, the multiparty meeting domain has many characteristics that may pose problems for ROUGE. In this paper, we carefully examine how well ROUGE scores correlate with human evaluation for extractive meeting summarization. Our experiments show that the correlation is generally rather low, but that a significantly better correlation can be obtained by accounting for several unique meeting characteristics, such as disfluencies and speaker information, especially when evaluating system-generated summaries.
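The kind of meta-evaluation the abstract describes boils down to two steps: score each summary automatically (e.g. ROUGE-1 recall, the unigram-overlap variant) and then correlate those scores with human ratings across systems. A minimal sketch of both steps, using made-up scores purely for illustration (none of these numbers come from the paper):

```python
# Minimal sketch of correlating an automatic metric with human ratings.
# rouge_1_recall is a simplified unigram-recall approximation of ROUGE-1;
# all scores below are illustrative, not from the paper.
from math import sqrt

def rouge_1_recall(reference: str, candidate: str) -> float:
    """Fraction of reference unigrams that also appear in the candidate."""
    ref_tokens = reference.split()
    cand_tokens = set(candidate.split())
    overlap = sum(1 for tok in ref_tokens if tok in cand_tokens)
    return overlap / len(ref_tokens)

def pearson(xs: list, ys: list) -> float:
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-system ROUGE scores and averaged human ratings.
rouge_scores = [0.42, 0.35, 0.51, 0.29]
human_scores = [3.8, 3.1, 4.2, 2.7]
print(round(pearson(rouge_scores, human_scores), 3))
```

A low Pearson (or Spearman) value over real system outputs is what the paper reports as the baseline finding for meetings; the paper's proposed fixes (handling disfluencies, speaker information) change how the summaries are preprocessed before ROUGE scoring, not the correlation computation itself.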
Feifan Liu, Yang Liu
Added: 29 Oct 2010
Updated: 29 Oct 2010
Type: Conference
Year: 2008
Where: ACL
Authors: Feifan Liu, Yang Liu