Sciweavers search results for "Comparing Automatic and Human Evaluation of NLG Systems"

CICLing 2005 (Springer)
Evaluating Evaluation Methods for Generation in the Presence of Variation
Recent years have seen increasing interest in automatic metrics for the evaluation of generation systems. When a system can generate syntactic variation, automatic evaluation becom...
Amanda Stent, Matthew Marge, Mohit Singhai
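
The problem this entry points at, that a single reference unfairly penalizes legitimate syntactic variation, shows up in any overlap-based metric. A minimal sketch (illustrative only, not the paper's method) of BLEU-style clipped unigram precision computed against multiple references:

    from collections import Counter

    def clipped_unigram_precision(candidate, references):
        # Count each candidate token as matched up to the maximum number
        # of times it appears in any single reference (BLEU-style clipping).
        cand = Counter(candidate.split())
        max_ref = Counter()
        for ref in references:
            for tok, n in Counter(ref.split()).items():
                max_ref[tok] = max(max_ref[tok], n)
        matched = sum(min(n, max_ref[tok]) for tok, n in cand.items())
        return matched / max(1, sum(cand.values()))

    # Two valid realizations of the same content; scoring against both
    # avoids penalizing the syntactic variation.
    refs = ["the cheapest flight leaves at noon",
            "the flight leaving at noon is the cheapest"]
    print(clipped_unigram_precision("the cheapest flight leaves at noon", refs))  # 1.0

With only one of the two references available, a valid paraphrase scores well below 1.0, which is exactly the kind of metric behavior the paper evaluates.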

AIA 2007
Improving extractive dialogue summarization by utilizing human feedback
Automatic summarization systems are usually trained and evaluated in a particular domain with fixed data sets. When such a system is to be applied to slightly different input, la...
Margot Mieskes, Christoph Müller, Michael Str...

MM 2006 (ACM)
Choreographic buttons: promoting social interaction through human movement and clear affordances
We used human movement as the basis for designing a collaborative aesthetic design environment. Our intention was to promote social interaction and creative expression. We employe...
Andrew Webb, Andruid Kerne, Eunyee Koh, Pranesh Jo...

NAACL 2010
Predicting Human-Targeted Translation Edit Rate via Untrained Human Annotators
In the field of machine translation, automatic metrics have proven quite valuable in system development for tracking progress and measuring the impact of incremental changes. Howe...
Omar Zaidan, Chris Callison-Burch
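
Translation Edit Rate counts the word-level edits needed to turn a hypothesis into a reference, normalized by reference length; HTER uses a reference targeted by a human post-editor. A rough sketch assuming plain Levenshtein edits (full TER also permits phrase shifts, omitted here, so this is an upper bound):

    def ter_approx(hyp, ref):
        # Word-level Levenshtein distance divided by reference length.
        h, r = hyp.split(), ref.split()
        d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
        for i in range(len(h) + 1):
            d[i][0] = i
        for j in range(len(r) + 1):
            d[0][j] = j
        for i in range(1, len(h) + 1):
            for j in range(1, len(r) + 1):
                cost = 0 if h[i - 1] == r[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # delete
                              d[i][j - 1] + 1,        # insert
                              d[i - 1][j - 1] + cost) # substitute
        return d[len(h)][len(r)] / max(1, len(r))

    # One insertion against a six-word reference: 1/6, about 0.17.
    print(ter_approx("the cat sat on mat", "the cat sat on the mat"))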

ACL 2001
A Machine Learning Approach to the Automatic Evaluation of Machine Translation
We present a machine learning approach to evaluating the well-formedness of the output of a machine translation system, using classifiers that learn to distinguish human reference tran...
Simon Corston-Oliver, Michael Gamon, Chris Brocket...
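
The classification framing can be sketched with any off-the-shelf learner. The two features below (length and type/token ratio) and the toy training strings are illustrative stand-ins, not the linguistic feature set or data of Corston-Oliver et al.:

    from sklearn.linear_model import LogisticRegression

    def features(sentence):
        # Toy fluency proxies: length and type/token ratio. The actual
        # paper used much richer linguistic features.
        toks = sentence.split()
        return [len(toks), len(set(toks)) / max(1, len(toks))]

    human = ["the committee approved the proposal after a brief debate"]
    machine = ["the committee the committee approved the proposal after debate"]
    X = [features(s) for s in human + machine]
    y = [1] * len(human) + [0] * len(machine)  # 1 = human reference

    clf = LogisticRegression().fit(X, y)
    # Well-formedness score for new MT output = predicted P(human-like).
    print(clf.predict_proba([features("proposal the approved committee the")])[0][1])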