Sciweavers

Search results for "Evaluation Metrics for Knowledge-Based Machine Translation"

EMNLP 2010
Automatic Evaluation of Translation Quality for Distant Language Pairs
Automatic evaluation of Machine Translation (MT) quality is essential to developing high-quality MT systems. Various evaluation metrics have been proposed, and BLEU is now used as ...
Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhit...

MT 2010
Metrics for MT evaluation: evaluating reordering
Translating between dissimilar languages requires an account of the use of divergent word orders when expressing the same semantic content. Reordering poses a serious problem for s...
Alexandra Birch, Miles Osborne, Phil Blunsom

EMNLP 2008
Decomposability of Translation Metrics for Improved Evaluation and Efficient Algorithms
BLEU is the de facto standard for evaluation and development of statistical machine translation systems. We describe three real-world situations involving comparisons between diff...
David Chiang, Steve DeNeefe, Yee Seng Chan, Hwee T...

ICTAI 2010 (IEEE)
Support Vector Methods for Sentence Level Machine Translation Evaluation
Recent work in the field of machine translation (MT) evaluation suggests that sentence level evaluation based on machine learning (ML) can outperform the standard metrics such as B...
Antoine Veillard, Elvina Melissa, Cassandra Theodo...

EMNLP 2006
Re-evaluating Machine Translation Results with Paraphrase Support
In this paper, we present ParaEval, an automatic evaluation framework that uses paraphrases to improve the quality of machine translation evaluations. Previous work has focused on...
Liang Zhou, Chin-Yew Lin, Eduard H. Hovy
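
BLEU recurs throughout these results as the de facto baseline metric. For background, below is a minimal Python sketch of sentence-level BLEU (clipped n-gram precision combined with a brevity penalty). It is not the implementation from any of the papers listed; the single-reference setup, the smoothing floor, and the example sentences are illustrative assumptions.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def sentence_bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU against a single reference (illustrative only).

    Combines clipped n-gram precisions for n = 1..max_n with a brevity
    penalty; zero precisions are floored so the log stays defined.
    """
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clipping: each hypothesis n-gram is credited at most as often
        # as it occurs in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = sum(hyp_counts.values())
        precisions.append(max(overlap, 1e-9) / max(total, 1))

    # Brevity penalty discourages hypotheses shorter than the reference.
    if len(hypothesis) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1.0 - len(reference) / max(len(hypothesis), 1))

    # Geometric mean of the n-gram precisions, scaled by the penalty.
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)


if __name__ == "__main__":
    hyp = "the cat sat on the mat".split()
    ref = "there is a cat on the mat".split()
    print(round(sentence_bleu(hyp, ref), 3))  # low score: little 4-gram overlap
```

Standard toolkits compute BLEU at the corpus level over multiple references; the single-reference sentence-level form above is only for illustration of the counting and penalty terms that the decomposability and sentence-level evaluation papers in this listing discuss.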