Sciweavers

119 search results, page 5 of 24, related to "Better Evaluation Metrics Lead to Better Machine Translation"
CLEF 2001 (Springer)
iCLEF 2001 at Maryland: Comparing Term-for-Term Gloss and MT
For the first interactive Cross-Language Evaluation Forum, the Maryland team focused on comparison of term-for-term gloss translation with full machine translation for the documen...
Jianqiang Wang, Douglas W. Oard
ACL 2008
Name Translation in Statistical Machine Translation - Learning When to Transliterate
We present a method to transliterate names in the framework of end-to-end statistical machine translation. The system is trained to learn when to transliterate. For Arabic to Engl...
Ulf Hermjakob, Kevin Knight, Hal Daumé III
AUSAI 2004 (Springer)
A Bayesian Metric for Evaluating Machine Learning Algorithms
How to assess the performance of machine learning algorithms is a problem of increasing interest and urgency as the data mining application of myriad algorithms grows. The standard...
Lucas R. Hope, Kevin B. Korb
LREC 2008
Sensitivity of Automated MT Evaluation Metrics on Higher Quality MT Output: BLEU vs Task-Based Evaluation Methods
We report the results of an experiment to assess the ability of automated MT evaluation metrics to remain sensitive to variations in MT quality as the average quality of the compa...
Bogdan Babych, Anthony Hartley
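The BLEU metric discussed in the entry above scores a candidate translation by combining modified n-gram precision against a reference with a brevity penalty. The sketch below is a minimal, illustrative sentence-level version only; real evaluations use corpus-level BLEU with smoothing (e.g. the sacrebleu toolkit), and the function name here is an assumption, not any paper's implementation.

```python
from collections import Counter
import math

def bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = candidate.split(), reference.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        c_ngrams, r_ngrams = ngrams(cand, n), ngrams(ref, n)
        # clipped overlap: each n-gram counts at most as often as in the reference
        overlap = sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
        total = max(sum(c_ngrams.values()), 1)
        if overlap == 0:
            return 0.0  # a zero precision collapses the geometric mean
        log_prec_sum += math.log(overlap / total)
    # brevity penalty discourages overly short candidates
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(log_prec_sum / max_n)
```

The unsmoothed zero-precision cutoff is exactly why, as the abstract notes, sentence-level BLEU loses sensitivity on higher-quality output: many acceptable variations share no 4-gram with a single reference.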
EMNLP 2009
Feasibility of Human-in-the-loop Minimum Error Rate Training
Minimum error rate training (MERT) involves choosing parameter values for a machine translation (MT) system that maximize performance on a tuning set as measured by an automatic e...
Omar Zaidan, Chris Callison-Burch
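MERT, as described in the abstract above, tunes a linear model's feature weights so that reranking the system's n-best lists maximizes an automatic metric on a tuning set. The toy sketch below uses brute-force grid search rather than Och's exact line search, and every name and interface in it is hypothetical, chosen only to illustrate the objective.

```python
def mert_grid(nbest_lists, metric, grid):
    """Toy MERT: pick the weight vector in `grid` that maximizes `metric`
    over the hypotheses selected by reranking each n-best list.

    nbest_lists: one list per tuning sentence of (feature_vector, hypothesis)
    metric:      function mapping a list of picked hypotheses to a score
    grid:        iterable of candidate weight vectors
    """
    def model_score(feats, w):
        return sum(f * wi for f, wi in zip(feats, w))

    best_w, best_m = None, float("-inf")
    for w in grid:
        # rerank: keep the top-scoring hypothesis per sentence under weights w
        picks = [max(nb, key=lambda fh: model_score(fh[0], w))[1]
                 for nb in nbest_lists]
        m = metric(picks)
        if m > best_m:
            best_w, best_m = w, m
    return best_w, best_m
```

In the human-in-the-loop setting the paper studies, `metric` would be replaced by (or calibrated against) human judgments of the picked hypotheses, which is what makes each evaluation of a weight vector expensive.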