AIA
2007

Improving extractive dialogue summarization by utilizing human feedback

Automatic summarization systems are usually trained and evaluated in a particular domain with fixed data sets. When such a system is to be applied to slightly different input, labor- and cost-intensive annotations have to be created to retrain it. We address this problem by providing users with a GUI that allows them to correct automatically produced, imperfect summaries. Each corrected summary is in turn added to the pool of training data, so the performance of the system is expected to improve as it adapts to the new domain.

KEY WORDS Multi-Party Dialogues, Automatic Summarization, GUI, Feedback, Learning
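The feedback loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the toy word-frequency scorer, the class name, and all method names are assumptions, not the authors' system. It shows the workflow of the abstract (summarize, collect a user's corrected selection, fold it back into the training pool) rather than any particular learning algorithm.

```python
# Toy sketch of the feedback loop: the system produces an extractive
# summary, a user corrects it (via the GUI in the paper), and the
# corrected selection is added to the training pool so the model adapts.
# The scoring model here is illustrative, not the authors' method.
from collections import Counter


class ToyExtractiveSummarizer:
    def __init__(self):
        self.kept_words = Counter()     # words from sentences users kept
        self.dropped_words = Counter()  # words from sentences users dropped

    def score(self, sentence):
        words = sentence.lower().split()
        # Naive ratio score with add-one smoothing: prefer words that
        # appeared in sentences users kept in earlier corrections.
        return sum(
            (self.kept_words[w] + 1) / (self.dropped_words[w] + 1)
            for w in words
        ) / max(len(words), 1)

    def summarize(self, sentences, k=2):
        # Extractive summary: the k highest-scoring sentences.
        return sorted(sentences, key=self.score, reverse=True)[:k]

    def add_feedback(self, sentences, kept):
        # A corrected summary from the GUI becomes new training data:
        # sentences the user kept count as positive, the rest as negative.
        for s in sentences:
            target = self.kept_words if s in kept else self.dropped_words
            target.update(s.lower().split())
```

After a round of feedback, sentences sharing vocabulary with user-approved ones rank higher, which is the adaptation effect the abstract describes, here in its simplest possible form.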
Type Conference
Year 2007
Where AIA
Authors Margot Mieskes, Christoph Müller, Michael Strube