Sciweavers

46 search results - page 5 / 10
» Metrics for MT evaluation: evaluating reordering
COLING
2008
13 years 9 months ago
The Impact of Reference Quality on Automatic MT Evaluation
Language resource quality is crucial in NLP. Many of the resources used are derived from data created by human beings outside of an NLP context, especially regarding MT and reference ...
Olivier Hamon, Djamel Mostefa
ACL
2009
13 years 5 months ago
Correlating Human and Automatic Evaluation of a German Surface Realiser
We examine correlations between native speaker judgements of automatically generated German text and automatic evaluation metrics. We look at a number of metrics from the MT a...
Aoife Cahill
LREC
2010
13 years 9 months ago
A Dataset for Assessing Machine Translation Evaluation Metrics
We describe a dataset containing 16,000 translations produced by four machine translation systems and manually annotated for quality by professional translators. This dataset can ...
Lucia Specia, Nicola Cancedda, Marc Dymetman
EMNLP
2010
13 years 5 months ago
Automatic Evaluation of Translation Quality for Distant Language Pairs
Automatic evaluation of Machine Translation (MT) quality is essential to developing high-quality MT systems. Various evaluation metrics have been proposed, and BLEU is now used as ...
Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhit...
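The entry above notes that BLEU has become the standard automatic MT evaluation metric. As a minimal, illustrative sketch of how such an n-gram overlap metric is typically computed (the sacrebleu library and the toy data are my own assumptions, not material from the paper):

```python
# Minimal sketch: corpus-level BLEU with the sacrebleu library (assumed
# third-party dependency; toy data, not taken from the paper above).
import sacrebleu

# Toy system outputs and one reference translation per segment.
hypotheses = [
    "the cat sat on the mat",
    "there is a book on the table",
]
references = [
    "the cat sat on the mat",
    "a book is on the table",
]

# corpus_bleu takes the system stream and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```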
ACL
2010
13 years 5 months ago
Tackling Sparse Data Issue in Machine Translation Evaluation
We illustrate and explain problems of n-gram-based machine translation (MT) metrics (e.g. BLEU) when applied to morphologically rich languages such as Czech. A novel metric SemPO...
Ondrej Bojar, Kamil Kos, David Marecek
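The last entry above points at the sparse-data problem n-gram metrics face for morphologically rich languages such as Czech: a single inflectional variant breaks every n-gram that contains it. A toy illustration of that effect (sacrebleu and the example sentences are my own assumptions, not data from the paper):

```python
# Toy illustration (assumed example, not from the paper): changing only the
# inflection of "new book" in Czech removes every matching n-gram spanning
# the changed tokens, so BLEU drops sharply although the meaning is close.
import sacrebleu

reference = "viděl jsem novou knihu"    # "I saw a new book"
exact_hyp = "viděl jsem novou knihu"    # identical wording
inflected = "viděl jsem nové knihy"     # same lemmas, different case/number

for hyp in (exact_hyp, inflected):
    score = sacrebleu.sentence_bleu(hyp, [reference]).score
    print(f"{hyp!r}: BLEU = {score:.1f}")
```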